diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcnzx" "b/data_all_eng_slimpj/shuffled/split2/finalzzcnzx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcnzx" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Conclusion and Limitation}\n\\label{sec:discussion}\n\n\\textbf{Limitations}\nIn our experiments, we only train and test our method on two tasks, which limits the scope of the proposed method. More dexterous manipulation tasks will be our future research direction. Another potential improvement of our method is to use Recurrent Neural Network and temporal information for policy networks. It will enable us to do long-horizon tasks.\n\n\\textbf{Conclusion}\nTo the best of our knowledge, our approach is the first work to train a dexterous manipulation reinforcement learning policy with point cloud inputs that can transfer to the real world. We justified that direct sim-to-real transfer is possible for two manipulation tasks with point cloud representation. \n\n\\acknowledgments{This work was supported, in part, by grants from NSF CCF-2112665 (TILOS), NSF 1730158 CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI), NSF ACI-1541349 CC*DNI Pacific Research Platform, the Industrial Technology Innovation Program (20018112, Development of autonomous manipulation and gripping technology using imitation learning based on visualtactile sensing) funded by the Ministry of Trade, Industry and Energy of the Republic of Korea, and gifts from Meta, Google, Qualcomm. We also thanks Fanbo Xiang for suggestion on shader setting in SAPIEN, Ruihan Yang for helpful discussion on RL training.}\n\n\\section{Experiments}\n\\label{sec:exp}\n\n\\subsection{Experimental Setup}\n\\label{sec:exp_setup}\n\n\\begin{figure*}[t]\n \\begin{minipage}{0.20\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/bottle_exp.pdf} \\\\\n \\vspace{-1em}\n \\quad {(a) }\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.20\\linewidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/can_exp.pdf} \\\\\n \\vspace{-1em}\n \\quad {(b)}\n \\end{minipage}\n \\begin{minipage}{0.20\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/abl_bottle.pdf} \\\\\n \\vspace{-1em}\n \\quad {(c)}\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.20\\linewidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/abl_can.pdf}\\\\\n \\vspace{-1em}\n \\quad {(d)}\n \\end{minipage}%\n \\begin{minipage}{0.20\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/abl_door.pdf} \\\\\n \\vspace{-1em}\n \\quad {(e)}\n\t\\end{minipage}\n \n \\caption{\\small{\\textbf{Training Curves.} The left two plots show the single-object and multi-object training curve of (a) bottle category and (b) can category. The right three plots show the ablation results on the (c) grasping bottle (d) grasping can and (e) door opening. The x-axis is the training iterations and y-axis is the normalized episodic return. The shaded area indicates standard error and the performance is evaluated on five random seeds.}}\n \\label{fig:main_exp}\n\\end{figure*}\n\n\n\n\\begin{table}[t]\n\\renewcommand{\\arraystretch}{1.2}\n\\centering\n\\begin{tabular}{c|c|c|c|c|c}\n \\multirow{2}{*}{\\textbf{Settings}} & \\multicolumn{2}{c}{\\textbf{Bottle}} \\vline & \\multicolumn{2}{c}{\\textbf{Can}} \\\\ \n \n & Known Obj. & Novel Obj. & Known Obj. & Novel Obj. \\\\\\shline\n \n Single Obj. 
Training & $0.81\\pm0.09$ & $0.60\\pm0.06$ & $0.96\\pm0.04$ & $0.63\\pm0.18$ \\\\\n \n Multi Obj. Training & $\\textbf{0.83}\\pm\\textbf{0.16}$ & $\\textbf{0.81}\\pm\\textbf{0.15}$ & $\\textbf{0.93}\\pm\\textbf{0.07}$ & $\\textbf{0.68}\\pm\\textbf{0.09}$\\\\\n\\end{tabular}\n\\vspace{0.5em}\n\\caption{\\small{\\textbf{Experiment on Multi-object Training.} We evaluate the policies trained with single and multiple objects on the bottle (left two columns) and can (right two columns) categories with point cloud input. We test them on both known and novel objects. The success rates are reported over 5 seeds.}}\n\\vspace{-2em}\n\\label{tab:single_multi}\n\\end{table} \n\n\\textbf{Point Cloud Pre-processing:}\nTo enable smooth transfer from simulation to the real world, we apply the same data pre-processing procedure to the point cloud captured by the camera. It involves four steps: (i) Crop the point cloud to the work region with a manually-defined bounding box; (ii) Down-sample the point cloud uniformly to $512$ points; (iii) Add distance-dependent Gaussian noise to the simulated point cloud to improve sim2real robustness; (iv) Transform the point cloud from the camera frame to the robot base frame using the camera pose. In simulation, we use the ground-truth camera pose with multiplicative noise for the frame transformation. In the real world, we perform hand-eye calibration to get the camera extrinsic parameters. A code sketch of these steps is given at the end of this subsection. \n\n\\textbf{Evaluation Criterion:}\nWe evaluate the performance of a policy by its success rate. For grasping tasks, a trial is considered a success in simulation if $d_{ot} < 0.05m$, where $d_{ot}$ is the distance between the object position and the goal position. In the real world, a trial is considered successful if the XY position of the object is within 5cm of the target position and the object is at least 15cm above the table top. For door opening, a trial is considered successful if the door is opened to at least roughly 45 degrees.\n\n\\textbf{EigenGrasp Baseline:}\nWe choose EigenGrasp~\\cite{ciocarlie2007dexterous} as the grasp representation. Given an object mesh model, we use GraspIt~\\cite{miller2004graspit} to search for valid grasps for the Allegro Hand. Then, we use the RRTConnect~\\cite{kuffner2000rrt} motion planner implemented in OMPL~\\cite{sucan2012open} to plan a joint trajectory to the pre-grasp pose and then plan a screw motion from the pre-grasp pose to the grasp pose. Finally, we close all fingers based on the searched grasp pose and lift the object to the target. Note that, different from our approach, the baseline method \\textbf{requires a complete object model to search for grasps and the ground-truth object pose} to align the grasp pose in the robot frame. To evaluate the performance of the baseline on novel objects, we first build a grasp database on the ShapeNet bottle and can categories using GraspIt. Given the sensory data of a new object, we search for the most similar objects in the database and use their grasps for the novel object. Here we compare the performance of our method with the baselines in the real world. \n\n\\textbf{Training:}\nWe train RL in two settings for grasping: (i) training on a single object; (ii) training on multiple objects jointly. For single-object grasping, we perform experiments on both the ``tomato soup can'' and the ``mustard bottle'' from YCB. For multi-object training, we choose 10 objects from the can or bottle categories of ShapeNet. In the multi-object setting, we randomly choose one object for each training episode. 
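\nReturning to the point-cloud pre-processing described above, the following is a minimal sketch of the four steps (the function name, work-region box, and noise scale are our own illustrative choices; we assume \\texttt{numpy} and a $4\\times 4$ camera-to-base extrinsic matrix from calibration):\n\\begin{verbatim}\nimport numpy as np\n\ndef preprocess(points, extrinsic, lo, hi, n=512, sim=False):\n    # (i) crop to the work region with a manually defined box\n    points = points[np.all((points >= lo) & (points <= hi), axis=1)]\n    # (ii) uniformly down-sample to a fixed number of points\n    idx = np.random.choice(len(points), n, replace=len(points) < n)\n    points = points[idx]\n    # (iii) in simulation, add distance-dependent Gaussian noise\n    if sim:\n        dist = np.linalg.norm(points, axis=1, keepdims=True)\n        points = points + np.random.randn(n, 3) * 0.001 * dist\n    # (iv) transform from the camera frame to the robot base frame\n    return points @ extrinsic[:3, :3].T + extrinsic[:3, 3]\n\\end{verbatim}\n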
Here, we use can and bottle as experimental subjects since they represent two different basic grasping patterns~\\cite{napier1956prehensile} for an anthropomorphic hand: the precision grasp and the power grasp. For door opening, we only train the policy on a door with fixed lever geometry.\n\n\\subsection{Comparison of Single-object and Multi-object Training}\nWe plot the training curves of RL in Figure~\\ref{fig:main_exp} (a) and (b). In general, our method can learn to grasp and move the object to the goal pose within 600 iterations, where each iteration contains 20K environment steps. We then evaluate the trained policies on both known and novel objects, and the results are shown in Table~\\ref{tab:single_multi}. We run 100 trials to compute the average success rate. \nOn grasping known objects, we find that the agent trained on a single object outperforms the agent trained on multiple objects by a small margin, in both learning efficiency and final success rate. However, the agent trained on multiple objects does much better at grasping novel objects. Our results suggest that training with multiple objects is of great importance for novel-object generalization.\n\n\\subsection{Ablation Results in Simulation}\n\\vspace{-0.5em}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/real_expriment.pdf}\n \\caption{\\small{\\textbf{Real-experiment:} We evaluate our point cloud policy on various unseen objects.}}\n \\label{fig:real_expriment}\n \\vspace{-0.1in}\n\\end{figure}\n\n\\begin{table}[t]\n\t\\centering\n \\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{l|c|c|c|c|c|c}\n \\multirow{2}{*}{\\textbf{Settings}} & \\multicolumn{2}{c}{\\textbf{Bottle}} \\vline & \\multicolumn{2}{c}{\\textbf{Can}} \\vline& \\multicolumn{2}{c}{\\textbf{Door}}\\\\\n \n & Known Obj. & Novel Obj. & Known Obj. & Novel Obj. & Known Obj. & Novel Obj. \\\\\\shline\n w\/o Imagined PC. & $0.60\\pm0.46$ & $0.56\\pm0.51$ & $0.91\\pm0.17$ & $0.63\\pm0.07$& $0.14\\pm0.28$ & $0.11\\pm0.27$ \\\\\n \n w\/o Contact Rew. & $0.00\\pm0.00$ & $0.00\\pm0.00$ & $0.06\\pm0.08$ & $0.03\\pm0.06$& $0.21\\pm0.26$ & $0.20\\pm0.25$\\\\\n \n w\/o Both & $0.00\\pm0.00$ & $0.00\\pm0.00$ & $0.00\\pm0.00$ & $0.00\\pm0.00$& $0.00\\pm0.00$ & $0.00\\pm0.00$\\\\\n \n Ours & $\\textbf{0.83}\\pm\\textbf{0.16}$ & $\\textbf{0.81}\\pm\\textbf{0.15}$ & $\\textbf{0.93}\\pm\\textbf{0.07}$ & $\\textbf{0.68}\\pm\\textbf{0.09}$&$\\textbf{0.92}\\pm\\textbf{0.06}$&$\\textbf{0.79}\\pm\\textbf{0.11}$\\\\\n \\end{tabular}\n }\n \\vspace{0.5em}\n \\caption{\\textbf{Ablation Study:} We investigate the influence of the contact-based reward design and the imagined point cloud. We evaluate the success rate on both known and novel objects under four settings: (i) without the imagined point cloud; (ii) without the contact reward; (iii) without both; (iv) with both. }\n \\label{tab:ablation}\n \n\\end{table}\\hfill\n\n\nWe ablate two key innovations of the work: the reward design with oracle contact and the imagined hand point cloud. We perform experiments on four different variants: (i) without the imagined point cloud; (ii) without the contact-based reward design; (iii) without both the imagined point cloud and the contact-based reward design; (iv) our standard approach with both techniques. Note that variant (iii) is an approximation of~\\citep{corl21-inhand} in our environments and tasks. We compare both the learning curves and the evaluation success rates of these four variants. 
For the grasping task, Figure~\\ref{fig:main_exp} (c) and (d) show the results on the bottle and can categories, and Table~\\ref{tab:ablation} shows the success rates. Our findings can be summarized as follows. \n\nFirst, we find that contact reward information is of vital importance for training the point cloud RL policy on the multi-finger robot hand. Without the contact reward, the agent can hardly learn anything (red and green curves in the figure) and gets a nearly zero success rate during evaluation for both the bottle and can categories. \nBy encouraging contact between the fingers and the object, the RL agent can avoid getting stuck in local minima and learn meaningful manipulation behavior. \n\nSecond, the imagined point cloud also improves the training and test performance for both categories, though it is not as important as the contact reward. As shown in the learning curves of Figure~\\ref{fig:main_exp} (c) and (d), the policy utilizing the imagined point cloud as input learns faster in the early stage of training and shows much smaller variance. An interesting observation is that the imagined point cloud is more beneficial for the bottle category than for the can category. One possible reason is that grasping a bottle requires multi-finger coordination to perform a power grasp. Such coordination is highly dependent on the detailed finger information provided by the imagined point cloud, which helps the agent to better see the fingers even when they are occluded by the object, e.g., fingers behind the object. \n\nThe experiments on the door opening task also support these findings. As shown in Figure~\\ref{fig:main_exp} (e), the policy trained with the contact-based reward and the imagined hand point cloud outperforms the other ablation variants, which also demonstrates the effectiveness of our two key designs. Compared with the grasping experiments, we observe larger variations during policy training. One possible reason is that door opening suffers from heavier occlusion than grasping when the door lever is grasped by the robot hand, which influences the temporal consistency of the PointNet features.\n\n\\subsection{Real-World Evaluation}\n\n\\begin{table}[h]\n\\begin{minipage}[h]{0.6\\linewidth}\n\\vspace{0.4em}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|c|c|c}\n Method & Bottle & Can & Mixed Category\\\\ \\shline\n EigenGrasp Oracle & $0.66$ & $0.45$ & $N\/A$ \\\\\n EigenGrasp & $0.50$& $0.41$ & $N\/A$ \\\\\n Single Obj. Train & $0.75\\pm0.06$ & $0.75\\pm0.08$& $N\/A$ \\\\ \n Multi Obj. 
Train & $\\textbf{0.87}\\pm\\textbf{0.03}$ & $\\textbf{0.83}\\pm\\textbf{0.13}$ & $\\textbf{0.73}\\pm\\textbf{0.12}$\n\\end{tabular}\n}\n\\vspace{0.1em}\n\n\\caption{\\small{\\textbf{Real-World Grasping Experiment}: This experiment consists of 3 categories comprising 26 objects: 10 bottles, 6 cans, and 10 objects from other, mixed categories.}}\n\\label{tab:real_grasping}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|c|c|c}\n Settings & Original Door & Novel Door 1 & Novel Door 2 \\\\ \\shline\n Single Door Train & $0.72\\pm0.07$ & $0.60\\pm0.03$ &$0.67\\pm0.01$\\\\\n\\end{tabular}\n}\n\\caption{\\small{\\textbf{Real-World Door Opening Experiment}: This experiment consists of 3 different doors; the first one (left) is used for training and the other two are for testing.}}\n\\label{tab:real_door_opening}\n\\end{minipage}\n\\begin{minipage}[h]{0.4\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figs\/objects_door.pdf}\n\\end{minipage}\\\\\n\\vspace{1em}\n\\vspace{-2em}\n\\end{table}\n\n\n\nWe perform sim2real experiments to evaluate the performance of our method in the real world. As shown in Figure~\\ref{fig:robot_setup}, we attach an Allegro Hand to an XArm6 robot arm to grasp objects on the table in front of the robot. We apply the same data pre-processing steps in both the simulated environment and the real world, as mentioned in Sec.~\\ref{sec:exp_setup}. \n\nThe task execution sequence is visualized in the bottom row of Figure~\\ref{fig:teaser}. Both the single-object and multi-object policies are evaluated on their corresponding categories, and the policies trained on multiple objects are additionally evaluated on the Mixed category with unseen objects. We run 10 independent trials for each object-policy pair. \n\nThe real-world evaluation results are shown in Table~\\ref{tab:real_grasping} and Table~\\ref{tab:real_door_opening}. The policies trained in the simulator with point cloud input can directly transfer to the real world without fine-tuning.\nFor both tasks, our policies can even be deployed on objects that were never seen during training. Moreover, we find that for grasping, training on multiple objects yields better performance than single-object training. Compared with the EigenGrasp baseline shown in Table~\\ref{tab:real_grasping}, our policies trained on both single and multiple objects perform better, even though the EigenGrasp baseline uses object model information. Since EigenGrasp + motion planning is an open-loop manipulation pipeline, small errors in object and scene modeling, e.g., the initial object pose or the object geometry, can lead to failed grasps. In contrast, our method works in a closed-loop fashion with point cloud observations and does not require privileged knowledge about the object.\n\n\\section{Introduction}\n\\label{sec:intro}\n\nDexterous manipulation remains one of the most challenging problems in robotics~\\cite{rl-openai}. While multi-finger hands create ample opportunities for robots to flexibly manipulate objects in our daily life, their high degrees of freedom and high-dimensional action spaces create significant optimization challenges for both search-based planning algorithms and policy learning algorithms. Recent efforts using model-free Reinforcement Learning have achieved encouraging results on complex manipulation tasks~\\cite{rl-openai, dex-rl-valve}. 
However, it still faces many challenges in generalizing to diverse objects and being deployed on multi-finger hands in the real world. \n\nFor example, the dexterous manipulation framework proposed by OpenAI et al.~\\cite{rl-openai} can solve in-hand manipulation of a Rubik's Cube with RL and transfer to the real robot hand. However, the policy is only trained with one particular object and is not able to \\emph{generalize to diverse objects}. \nTo achieve cross-object generalization, recent efforts proposed to learn robust 3D point cloud representations~\\cite{corl21-inhand, ilad, rl-pc-inhand-generalization, mu2021maniskill} with diverse objects using RL in simulation. While point cloud input has also been shown to be easier for Sim2Real transfer~\\cite{zhang2022close} given its focus on geometry instead of texture, the assumption of access to complete object point clouds and ground-truth states limits the transferability of the above methods to \\emph{real robot deployment}. Among these works, Chen et al.~\\cite{corl21-inhand} showed that cross-object generalization is achievable in simulation without knowing the shape of the object to grasp, but their requirement of real-time access to object states is itself a very challenging robot perception problem, especially under the large occlusions during hand-object interaction. \n\nIn this paper, we provide a sim-to-real reinforcement learning framework for generalizable dexterous manipulation, using two tasks with the Allegro Hand~\\cite{allegro}: (i) object grasping, where the test objects have not been seen during training; (ii) door opening, where the test doors have levers of novel shapes that have not been used in training.\nThe tasks are visualized in Figure~\\ref{fig:teaser}. In the door opening task, the robot needs to first rotate the lever to unlock the door latch and then pull the lever in a circular motion to open the door. \n\nWe perform our studies by training point cloud based reinforcement learning policies on the grasping and door opening tasks. With this approach, we list the three key findings of our framework for learning generalizable point cloud policies below:\n\n(i) We show that it is possible to achieve \\emph{direct sim-to-real transfer} for a dexterous manipulation policy with category-level generalizability when we use point clouds as the data representation.\n\n(ii) Raw point clouds captured by sensors often come with heavy occlusions and noise: only a very small portion of the observed points represent the robot fingers. We propose to \\emph{imagine} the complete robot finger point clouds according to the robot kinematic model and use them to augment the occluded real point cloud observations. We find that explicitly augmenting the input with \\emph{imagined} points helps achieve better robustness and sample efficiency for reinforcement learning.\n\n(iii) Different from existing works that add contact information to the input of RL, we design a novel reward using contact pair information without adding contact to the observation. This practice remarkably improves sample efficiency as well as learning stability, and avoids the dependency on contact sensors, which are often unavailable on real robots. 
\n\\section{Approach}\n\\label{sec:approach}\n\n\\begin{wrapfigure}{r}{7cm}\n\\vspace{-0.45cm}\n \\centering\n \\includegraphics[width=0.97\\linewidth]{figs\/real-experiment_setting.pdf}\n \\caption{\\small{\\textbf{Real-experiment Setup:} we use an Allegro Hand attached to an XArm6 and a RealSense D435 camera facing the robot.}}\n \\label{fig:robot_setup}\n \\vspace{-0.2in}\n\\end{wrapfigure}\n\nOur objective is to train, with RL, a generalizable point cloud policy on a dexterous robot hand-arm system that is able to grasp a wide range of objects or open a closed door. We aim at sim-to-real transfer without any real-world training or data. During testing, the robot can only access the single-view point cloud and the robot proprioception data. As discussed before, training such a policy comes with numerous technical challenges, including reward design and imperfect point cloud information. In this work, we propose a novel contact-based reward design and an imagined point cloud model to deal with these challenges.\n\n\\textbf{Preliminaries:}\nWe model the dexterous manipulation problem as a Partially Observable Markov Decision Process~(POMDP) $\\mathcal{M} = (\\mathcal{O}, \\mathcal{S}, \\mathcal{A}, \\mathcal{R}, \\mathcal{T}, \\mathcal{U}).$ Here, $\\mathcal{O}$ is the observation space, $\\mathcal{S}$ is the underlying state space, $\\mathcal{A}$ is the action space, $\\mathcal{R}$ is the reward function, $\\mathcal{T}$ is the transition dynamics, and $\\mathcal{U}$ generates the agent's observation. At timestep $t$, the environment is at the state $s_t\\in\\mathcal{S}$. The agent observes $o_t\\sim\\mathcal{U}(\\cdot|s_t)\\in\\mathcal{O}$, takes action $a_t$, and receives reward $r_t = \\mathcal{R}(s_t, a_t)$. The environment then transitions to the state $s_{t+1}\\sim\\mathcal{T}(\\cdot|s_t, a_t)$. The objective of the agent is to maximize the return $\\sum_{t=0}^T\\gamma^tr_t$, where $\\gamma$ is a discount factor. \n\n\\textbf{System Setup:} \nMany previous works on learning-based dexterous manipulation attach the hand to a fixed platform to simplify the experimental environment in the real world. In this work, we create a more flexible and powerful dexterous manipulation system which includes both the robotic hand and the arm~(Figure~\\ref{fig:robot_setup}). Concretely, we attach the Allegro Hand to an XArm6 robot. The Allegro Hand is a 16-DoF anthropomorphic hand with four fingers, and the XArm6 is a 6-DoF robot arm. We place a RealSense D435 camera at the right front of the robot to capture the point cloud. This setup brings additional challenges to RL exploration and Sim2Real deployment. We use the SAPIEN~\\cite{sapien} platform, a full-physics simulator, to build the environment for the whole system. The simulation time step is $0.005s$ to ensure stable contact simulation, and each control step lasts for $0.05s$, i.e., each control step spans ten simulation steps.\n\n\\textbf{Tasks and Objects:}\nIn this paper, we expect our robot to perform the grasping task over a diverse set of objects and to open a locked door by rotating its lever. In the grasping task, we first select a random object from an object dataset and place it on the table. The robot is then required to move it to a target pose. Moreover, the robot should be able to generalize to different initial states, so we randomize both the initial pose and the goal pose for each trial. In simulation experiments, we use bottles and cans from both the ShapeNet~\\cite{shapenet} and YCB~\\cite{ycb} datasets. 
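\nTo make the simulation timing in the System Setup above concrete, the control loop can be sketched as follows (the helper names are illustrative, not SAPIEN's actual API):\n\\begin{verbatim}\nSIM_DT, CONTROL_DT = 0.005, 0.05\nSUBSTEPS = int(round(CONTROL_DT / SIM_DT))  # = 10\n\ndef control_step(env, action):\n    # hold one policy action while the physics advances ten sub-steps\n    env.set_pd_targets(action)      # illustrative helper\n    for _ in range(SUBSTEPS):\n        env.step_physics()          # illustrative helper\n    return env.get_observation()\n\\end{verbatim}\n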
\nIn real-world experiments, we use novel unseen objects to test the policy. \nIn the door opening task, the robot hand is required to first rotate the lever to unlock the door latch and then pull the lever in a circular motion. We use three doors to test the policies in both simulation and the real world; only one is used for training and the other two are unseen doors. We also randomize the initial pose for each trial. \n\n\\textbf{Observation Space:} \nThe observation contains both visual and proprioceptive information with four modalities: (1) the observed point cloud provided by the camera; (2) proprioception signals of the robot, including joint positions and the end-effector position; (3) the imagined hand point cloud proposed in Sec.~\\ref{sec:imagination}; (4) the object goal position provided in each trial. All of this information is accessible on the real robot. The dimension of each observation modality is shown in Figure~\\ref{fig:pipeline}.\n\n\\textbf{Action Space:}\nThe action controls both the $6$-DOF robot arm and the $16$-DOF hand, so it has $6+16=22$ dimensions in total. The robot arm action is parameterized by the 6D translation and rotation of the end-effector relative to a reference pose. We use a damped least-squares inverse kinematics solver with a damping constant $\\lambda = 0.05$ to compute the joint motion. Each finger joint of the Allegro Hand is controlled by a position controller. Both the robot arm and the hand are driven by PD controllers. \n\n\\textbf{Network Architecture:}\nThe network architecture is visualized in Figure~\\ref{fig:pipeline} and described in Sec.~\\ref{sec:impl}. \n\n\\subsection{Reward Design with Oracle Contact}\n\\label{sec:reward}\nSince we aim to solve the dexterous manipulation problem with pure RL, the reward design is central to the method. We need a good reward function to ensure proper interaction between the robotic hand and the object. The whole interaction process consists of two phases. The first phase is to simply reach the object. The second phase is to grasp the object and move it to the target, which is more challenging. For the first phase, we encourage reaching with the following reaching reward:\n\\begin{equation}\n r_{\\rm reach} = \\sum_{\\rm finger}\\frac{1}{\\epsilon_r + d(\\textbf{x}_{\\rm finger}, \\textbf{x}_{\\rm obj})}.\n \\label{reach_reward}\n\\end{equation}\nHere, $\\textbf{x}_{\\rm finger}$ and $\\textbf{x}_{\\rm obj}$ are the Cartesian positions of each fingertip and of the target object, and $\\epsilon_r$ is a small constant that keeps the reward bounded. Note that $\\textbf{x}_{\\rm obj}$ is available when we perform training in simulation. However, using this reward alone cannot ensure proper contact for grasping. For example, the robot can touch the object with the back of the hand rather than the palm and then get stuck in this local minimum. Therefore, we introduce a novel contact reward to guarantee meaningful contact behavior:\n\\begin{equation}\n r_{\\rm contact} = {\\rm \\textbf{IsContact}}({\\rm thumb}, {\\rm object})\\ {\\rm \\textbf{AND}}\\ \\left(\\sum_{\\rm finger} \\textbf{IsContact}({\\rm finger}, {\\rm object}) \\geq 2 \\right).\n\\end{equation}\nThis contact reward function outputs a boolean value in $\\{0, 1\\}$. It outputs $1$ only if the thumb is in contact with the object and at least two fingers are in contact with the object. Intuitively, it encourages the robot to cage the object within its fingers. In this case, the robot can quickly find a stable grasp and lift the object to the target location. 
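\nA minimal sketch of these two reward terms (we assume fingertip positions and boolean contact flags queried from the simulator; the names and the value of the constant are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef reach_reward(fingertips, x_obj, eps=0.01):\n    # sum of inverse fingertip-to-object distances, bounded by eps\n    return sum(1.0 / (eps + np.linalg.norm(x - x_obj))\n               for x in fingertips)\n\ndef contact_reward(contacts):\n    # contacts: finger name -> bool (thumb included), from the simulator\n    return float(contacts['thumb'] and sum(contacts.values()) >= 2)\n\\end{verbatim}\n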
The lifting behavior is encouraged by\n\\begin{equation}\n r_{\\rm lift} = r_{\\rm contact}{\\rm \\textbf{Lift}}(\\textbf{x}_{\\rm obj}, \\textbf{x}_{\\rm target}).\n\\end{equation}\nThe \\textbf{Lift} function has basically the same form as Equation~\\ref{reach_reward}; the main difference is that it returns a large reward value upon task completion. The overall reward function is a weighted combination of the terms above plus a control penalty:\n\\begin{equation}\n \\mathcal{R} = w_{\\rm reach} r_{\\rm reach} + w_{\\rm contact} r_{\\rm contact} + w_{\\rm lift} r_{\\rm lift} + w_{\\rm penalty} r_{\\rm penalty}.\n\\end{equation}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/CoRL_sim_pipeline_camera_ready.pdf}\n \\caption{\\small{\\textbf{Architecture:} our feature extractor takes the observed point cloud, the imagined point cloud, robot proprioception, and the goal pose as input and outputs a feature embedding. Both the actor and the critic take the same feature to predict the action and the value. The red points represent the imagined point cloud of the robot hand. Note that our network does not require RGB information.}}\n \\label{fig:pipeline}\n \\vspace{-0.1in}\n\\end{figure}\n\n\n\\subsection{Imagined Hand Point Cloud}\n\\label{sec:imagination}\nThe usage of point clouds comes with two challenges. The first challenge is occlusion, which may affect both the object under manipulation and the hand itself. When the robot hand is interacting with an object, the fingers may be occluded by the object. Since we do not assume tactile sensors in this work, this occlusion problem can be serious. The second challenge is the low point cloud resolution during RL training, where we can only use a limited number of points due to the memory limit. In this case, the number of points from the hand fingers may not be adequate to precisely capture the spatial relationship between the robot and the object. We propose a simple yet effective method to handle both issues in a unified manner. Our idea is to use an imagined hand point cloud in the observation to help the robot \\textit{see the interaction}. \n\nWe provide one example in Figure~\\ref{fig:pipeline}. Black points indicate the point cloud captured by the camera, in which some important details of the fingers are missing. These missing details provide crucial information about the interaction. Though such interaction information could also be inferred by combining proprioception with the visual input, we find that it is best to synthesize the missing details explicitly. Concretely, we can compute the pose of each finger link via forward kinematics, given the joint positions from the robot joint encoders and the robot kinematic model. Then, we synthesize the imagined point cloud~(blue points in Figure~\\ref{fig:pipeline}) by sampling points from the mesh of each finger link. This process is possible in both simulation and the real world. \n\n\\subsection{Training}\n\\label{sec:impl}\nWe adopt Proximal Policy Optimization~(PPO)~\\cite{schulman2017proximal} to train the agent in simulation, and then deploy it to the real world without fine-tuning. The network architecture is illustrated in Figure~\\ref{fig:pipeline}. Both the value and policy networks share the same visual feature extraction backbone. We concatenate the observed point cloud with the imagined hand point cloud as the input to the feature extractor. We also attach a one-hot encoding to each point that indicates whether it is an observed or an imagined point. 
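\nA sketch of how this policy input could be assembled (we assume a forward-kinematics routine and per-link meshes; the helper names and the points-per-link count are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef build_observation(obs_points, joint_pos, fk, link_meshes, m=8):\n    # imagined hand points: sample each finger-link mesh and place\n    # it with the link pose computed by forward kinematics\n    imagined = []\n    for name, mesh in link_meshes.items():\n        T = fk(joint_pos, name)         # 4x4 link pose in base frame\n        pts = mesh.sample(m)            # m surface points per link\n        imagined.append(pts @ T[:3, :3].T + T[:3, 3])\n    imagined = np.concatenate(imagined)\n    # one-hot origin flag: [1, 0] = observed, [0, 1] = imagined\n    obs = np.hstack([obs_points, np.tile([1, 0], (len(obs_points), 1))])\n    ima = np.hstack([imagined, np.tile([0, 1], (len(imagined), 1))])\n    return np.vstack([obs, ima])        # (N, 3 + 2) network input\n\\end{verbatim}\n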
\n\\section{Related Work}\n\n\\textbf{Dexterous Manipulation.} \nDexterous manipulation aims to enable robotic hands to achieve human-level dexterity for grasping and manipulating objects. Analytical methods have been proposed that adopt planning to solve this problem~\\citep{salisbury1982articulated,analytical-dex,rus1999hand, grasping-book, vkumar-survey, bohg-survey,Dogar2010}. However, they rely on detailed object models, which are not accessible when testing on unseen objects. To mitigate this issue, researchers have proposed learning-based methods for dexterous manipulation~\\citep{learning-based-review}. \nSome methods~\\citep{gps-manipulation, rl-openai, dex-rl-valve, corl21-inhand, rl-pc-inhand-generalization} take a reinforcement learning~(RL) approach, while another line of work~\\citep{il-dex, il-vr, soil, coarse-to-fine-il, chen2022learning} proposes to learn from demonstrations using imitation learning~(IL) to acquire the control policy. Recently, the combination of RL and IL~\\citep{dapg, tactile-guided, hand-teleop, corl20-suboptimal, dexmv, ilad} has also shown encouraging dexterous manipulation results in simulation. However, most methods either learn a policy for one single object or assume access to object states produced by a perfect detector, which increases the challenges of Sim2Real transfer. In this paper, we overcome these limitations by training on multiple objects and applying point cloud inputs for control. \n\n\\textbf{Point Cloud in Robotic Manipulation.} \nPoint cloud representations have been widely applied in the robotics community. Researchers have studied matching the observed point cloud to an object in a grasping dataset~\\citep{pointcloud-registration} and executing the corresponding grasping action~\\citep{dexnet-2017rss, bohg-survey}. For learning-based approaches, one line of research has focused on first estimating grasp proposals or affordances given the point cloud input and then planning accordingly for manipulation~\\citep{grasp-detection-pc, pointnet-gpd, 6dof-grasp, s4g, sparse-pc-grasp, wu2020grasp, wei2021gpr, ral2022-grasp-pc-dvgg, wang2022goal}. While these methods are designed for parallel-jaw grippers, recent advancements have also been made for grasping with dexterous hands using similar approaches~\\cite{Andrews2013, varley2015generating, brahmbhatt2019contactgrasp,lu2020planning}. However, this line of methods requires feedback from motion planning to estimate whether a grasp is plausible: oftentimes, a stable grasp pose is proposed but is not achievable by planning. To achieve efficient and flexible manipulation, reinforcement learning policies with point cloud inputs have been proposed~\\cite{corl21-inhand, rl-pc-inhand-generalization,ilad}, which allow the robot hand to flexibly adjust its pose while interacting with the object. However, these approaches still face challenges in transferring to the real robot, given the noisy, occluded point clouds in the real world, and training an RL policy with noisy point clouds increases the optimization difficulty. In this paper, we propose to use imagined point clouds, contact information, and multi-object training to combat these problems and achieve Sim2Real generalization.\n\n\\textbf{Using Contact Information for Manipulation.}\nHumans can manipulate objects purely from tactile sensing without seeing them. 
This biological fact inspires researchers to integrate contact and tactile information into the learning pipeline~\\cite{tactile-feedback,yuan2017gelsight,calandra2018more,murali2018learning,lambeta2020digit,lee2020making,bhirangi2021reskin}. For example, tactile and visual inputs are combined in~\\citep{lee2020making} for decision making. Tactile information is also utilized as input for RL-based manipulation policies~\\cite{tactile-features, tactile-intrinsic, tactile1, tactile-guided,xu2021towards}. For example, a visual-tactile sensor is used with an Allegro Hand in simulation for playing the piano~\\citep{xu2021towards}. Instead of using contacts as inputs, some methods also use contact and tactile information as rewards to encourage exploration and boost policy learning~\\cite{tactile-intrinsic, tactile-guided}. Motivated by these works, we provide a novel reward design based on each contact link of the robot hand, which encourages more reasonable grasping behavior. Our design does not require a real tactile sensor, either when training in simulation or when deploying on the real robot. \n\\section{Environment Details}\n\\vspace{-0.08in}\n\n\\textbf{Object Set.} \nFor single-object experiments, we use a single object mesh from the YCB dataset for training and testing. For multi-object experiments, we choose 10 objects from the ShapeNet dataset~\\cite{shapenet} for training, and another 40 objects for testing in simulation. The test objects of the real-world experiments are described in the main paper.\n\n\\textbf{Object Pre-processing.} \nTo use the above object models in the simulation experiments, we apply two preprocessing steps: scaling and convex decomposition.\nThe scaling step ensures that the object size is within a proper range for manipulation. The YCB objects~\\cite{ycb} are captured by real scans, so the scale of an object mesh from the YCB dataset aligns with its real counterpart and can be used for robot manipulation directly. Different from the YCB dataset, the ShapeNet dataset does not contain any scale information, e.g., a mug in ShapeNet can be larger than the robot. Therefore, we scale each object based on the diagonal length of its bounding box. For each category, we manually select a diagonal length so that all object instances from the category have the same bounding-box diagonal length after scaling. We also exclude objects with non-manifold geometry to avoid instability in the physical simulation. The next step is convex decomposition. We use V-HACD~\\cite{mamou2009simple} to decompose both YCB and ShapeNet objects into convex parts with default parameters. We exclude objects with more than 40 parts after convex decomposition, for both stability and efficiency of the physical simulation.\n\n\\textbf{Reward.}\nAs mentioned in Section 3.2 of the main paper, the overall reward for our task is composed of four parts: reach, contact, lift, and action penalty.\n\n\\begin{equation}\n \\mathcal{R} = w_{\\rm reach} r_{\\rm reach} + w_{\\rm contact} r_{\\rm contact} + w_{\\rm lift} r_{\\rm lift} + w_{\\rm penalty} r_{\\rm penalty}.\n\\end{equation}\nIn the implementation, we set the lift reward to the difference between the object's current height and its initial height, $r_{\\rm lift} = h_{\\rm current} - h_{\\rm init}$. The action penalty reward is $r_{\\rm penalty} = - ||a||_2^2$. 
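\nCombining the terms, a sketch of the overall reward (the weights and the contact gating are spelled out in the text that follows; \\texttt{action} is assumed to be a \\texttt{numpy} array):\n\\begin{verbatim}\ndef total_reward(r_reach, r_contact, r_lift, action,\n                 w=(1.0, 0.5, 10.0, 0.01)):\n    r_penalty = -(action ** 2).sum()            # squared L2 penalty\n    r_lift = r_lift if r_contact > 0 else 0.0   # lift gated by contact\n    return (w[0] * r_reach + w[1] * r_contact\n            + w[2] * r_lift + w[3] * r_penalty)\n\\end{verbatim}\n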
The reaching reward consists of the distance between the object and the target, and the distance between the fingertips and the target. \nThe weights for the four reward terms are: $w_{\\rm reach}=1$, $w_{\\rm contact}=0.5$, $w_{\\rm lift}=10$, $w_{\\rm penalty}=0.01$. Recall that the lift reward $r_{\\rm lift}$ is set to 0 if the contact reward $r_{\\rm contact}$ is $0$. Intuitively, this ensures that the robot can only receive the reward for lifting the object up if the robot hand is in good contact with the object. This design prevents the agent from using a large force to knock the object into the air or using unstable contacts to lift the object up.\n\n\\section{Learning Details}\n\n\\textbf{RL Training.}\nWe use on-policy RL training for all settings except the distillation experiments. We select PPO as the RL algorithm to train the point cloud based manipulation policy. The hyper-parameters of PPO are shown in Table~\\ref{tab:ppo}.\n\n\\textbf{Network Architecture.}\nThe RL agent uses a PointNet-based architecture as the visual backbone. As shown in Figure 3 of the main paper, we first concatenate the visual features from the PointNet with the proprioception features from the MLP. This concatenated feature is then shared by both the value network and the policy network to predict the value and the action. The details of the network architecture are shown in Table~\\ref{tab:arch}.\n\n\\begin{minipage}{.25\\textwidth}\n\\resizebox{0.95\\columnwidth}{!}{%\n\\begin{tabular}{l|c} \\shline\nParameter & Value \\\\ \\shline\nMini-Batch Size & 500 \\\\ \\hline\nLearning Rate & 3e-4 \\\\ \\hline\nClip Range & 0.8 \\\\ \\hline\nHorizon & 200 \\\\ \\hline\nEpoch & 10 \\\\ \\hline\nSteps per Iteration & 10 \\\\ \\hline\n\\end{tabular}\n}\n\\captionof{table}{PPO Parameters.}\n\\label{tab:ppo}\n\\end{minipage}\n\\hspace{0.5em}\n\\begin{minipage}{.73\\textwidth}\n\\resizebox{0.95\\columnwidth}{!}{%\n\\begin{tabular}{ccc}\\shline\nModule & Architecture & Output Dim\\\\\\shline\n\n\\multirow{2}*{Visual Feature Extractor} & PointNet Local Channel: (64, 128, 256) & 256 \\\\ \n& PointNet Global Channel: (256, ) & 256 \\\\ \\hline\n\n\\multirow{1}*{State Feature Extractor} & MLP: (64, 64) & 64 \\\\ \\hline\n\n\\multirow{1}*{Actor} & MLP: (64, 64) & 64 \\\\ \\hline\n\n\\multirow{1}*{Critic} & MLP: (64, 64) & 64 \\\\ \\hline\n \n\\end{tabular}\n}\n\n\\captionof{table}{Network Architecture.}\n\\label{tab:arch}\n\\end{minipage}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{Background and Related Work}\\label{sec:related-work}\nThe grammar-based visualization concept originates with \\cite{Wilkinson}, whose work changed the way scientists and developers think and inspired much of what followed. The Stanford team behind \\cite{Polaris} planted the seed of grammar-of-graphics software, after which many systems were developed, such as Tableau~\\cite{tableau} and ggplot2~\\cite{ggplot2}; although their support for customizing user preferences is limited, they introduced abstractions of data models, graphical geometries, visual encoding channels, scales, and guides (e.g., axes and legends), yielding a more expressive design space. JSOL's architecture is heavily influenced by these works: it inherits Wilkinson's grammar and components, and renders basic graphical scenes using these components together with the grammar that makes them generate a full scene. 
A component instance could be a visual channel such as \\textit{position}, \\textit{color}, \\textit{shape}, or \\textit{size}; it may also be a common data transformation such as binning, aggregation, sorting, or filtering. The grammar is the validation step responsible for making these components work together and for mapping data attributes to their components in the scene.\n\n\\subsection{Specification}\nVisualization libraries follow a number of distinct architecture types: \n\\begin{itemize}\n \\item \\textbf{Hierarchical:} Layers and components are implemented in a hierarchical view, so that when a new scene graph is introduced, authors build it in layers employing the components. If multiple graphics depend on the same component, they are linked.\n \\item \\textbf{Parallel:} Layers and components are built as two parallel, independent levels, so that when a scene graph is proposed, users use the components given by the system to implement it or to customize their graphics.\n \\item \\textbf{Hybrid:} A combination of the previous two types: the compiler and layers are hierarchical, while the layers' design is parallel (JSOL\\space adopts this type).\n\\end{itemize}\n\nThe compiler added on top makes JSOL\\space a declarative \\textit{domain-specific language (DSL)} for visualization design; by decoupling specification from execution details, declarative systems allow users to focus on specifying their application domain without limiting their ability to customize.\n\n\\cite{vega-lite} and \\cite{Bostock2009Protovis} follow the same DSL-compilation approach, using a declarative framework for mapping data to visual elements. JSOL, however, does not strictly impose a toolkit-specific lexicon of graphical marks; instead, it maps data attributes directly onto the HTML5 canvas element.\n\n\\subsection{Other Libraries}\\label{sec:comparative-study}\n\\subsubsection{GGPlot2}\\label{sec:ggplot2}\nIn \\cite{ggplot2}, there is an excessive concentration on low-level details, which makes plotting a hassle (e.g., drawing guides). Nevertheless, it provides a powerful model for graphics that makes it easy to produce complex multi-layered scenes. Demerits are hard to find in \\textit{ggplot2}, but one of them is the language it uses: \\textit{R}, as discussed earlier, requires a dedicated interpreter, making it laborious to install for users who only want to develop simple visualization scenes, although it is natural for the R audience to use. Furthermore, R does not support labeling of variables or methods, which makes users struggle when using it. 
Besides, it only runs on CPUs, making it slow compared to other libraries.\n\n\\subsubsection{D3}\\label{sec:d3}\nMany scientists, researchers, developers, and even programs make use of \\textit{D3}; it is suitable for nearly every user, but there are a few crucial drawbacks. It is an entirely low-level visualization grammar, which is hard for novice users to manage. A further drawback appears when working with SVG elements on big data: a huge number of DOM elements is generated, which may break browsers as a result of the added load.\n\n\\subsubsection{Protovis}\\label{sec:protovis}\nWe conducted a comparison between \\cite{Bostock2009Protovis} and JSOL\\space in Table \\ref{table:1}. \\cite{Bostock2009Protovis} composes custom views of data with simple marks such as pies (\\includegraphics*[height=\\fontcharht\\font`\\B]{basic_pie}) and dots. Unlike low-level graphics libraries that quickly become tedious for visualization, Protovis defines marks through dynamic properties that encode data, allowing inheritance, scales, and layouts to simplify construction. However, it is no longer under active development, which makes it unsuitable for many types of users.\n\n\\subsubsection{Vega}\\label{sec:vega}\nIn a previous section we saw that a Vega specification is simply a JSON object that describes an interactive visualization, which may appear akin to JSOL. However, Vega uses \\cite{2011-d3} as a backend engine to produce SVG visualization components from the user's grammar. A JSOL\\space specification, on the other hand, may be cross-compiled to provide a reusable visualization component, again given the user's input grammar.\n\n\\subsubsection{Vega-Lite}\\label{sec:vega-lite}\nWe conducted a comparative study between \\cite{vega-lite} and JSOL\\space in Table \\ref{table:1}. These two libraries are compared for a particular reason: users may assume that they are equivalent; they are indeed similar in features, but not in drawbacks. 
JSOL\\space is specifically developed for making a complete, detailed scene, which is why it collects all the specifications that developers or scientists need to configure their scene.\n\\begin{table}[tbh]\n \\centering\n \\caption{Comparative Study}\\label{table:1}\n \\renewcommand{\\arraystretch}{1.0}\n \\begin{tabular}{|l|l|l|l|l|l|}\n \\hline\n \\multicolumn{3}{|l|}{Grammar of Graphics layers} & Protovis & Vega & JSOL \\\\ \\hline\n \\multicolumn{3}{|l|}{Transformation layer} & Done & Done & Done \\\\ \\hline\n \\multicolumn{3}{|l|}{Data layer} & Done & Done & Done \\\\ \\hline\n \\multirow{17}{*}{\\begin{tabular}[c]{@{}l@{}}Geometry \\\\ layer\\end{tabular}} & \\multicolumn{2}{l|}{Point} & Done & Done & Done \\\\ \\cline{2-6} \n & \\multirow{4}{*}{Bar} & bar chart & Done & Done & Done \\\\ \\cline{3-6} & & \\begin{tabular}[c]{@{}l@{}}Stacked bar\\\\ chart\\end{tabular} & Done & Done & Done \\\\ \\cline{3-6}\n & & Histogram & Done & Done & Done \\\\ \\cline{3-6} & & \n \\begin{tabular}[c]{@{}l@{}}Vertical bar\\\\ chart\\end{tabular} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multirow{2}{*}{Area} & Area chart & Done & Done & Done \\\\ \\cline{3-6}\n & & \\begin{tabular}[c]{@{}l@{}}Stacked Area \\\\ chart\\end{tabular} & Done & Done & - \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Text} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Line} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Marks} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{HLine} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{VLine} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Pie chart} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Arc chart} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multicolumn{2}{l|}{Picture} & Done & Done & Done \\\\ \\cline{2-6}\n\t& \\multirow{2}{*}{Tick} & Dot plot & Done & - & - \\\\ \\cline{3-6}\n\t& & Strip plot & Done & - & - \\\\ \\hline\n \\multicolumn{3}{|l|}{Scale layer} & Done & Done & Done \\\\ \\hline\n \\multirow{6}{*}{Axes Layer} & \\multicolumn{2}{l|}{Cartesian coordinate} & Done & Done & Done \\\\ \\cline{2-6}\n & \\multicolumn{2}{l|}{Coordinate equal} & Done & - & Done \\\\ \\cline{2-6}\n & \\multicolumn{2}{l|}{Coordinate flip} & Done & - & Done \\\\ \\cline{2-6}\n & \\multicolumn{2}{l|}{Coordinate polar} & Done & - & Done \\\\ \\cline{2-6}\n & \\multicolumn{2}{l|}{Parallel coordinate} & Done & Done & Done \\\\ \\cline{2-6}\n & \\multicolumn{2}{l|}{Polar parallel coordinate} & - & - & Done \\\\ \\hline\n \\multicolumn{3}{|l|}{Aesthetics layer} & Done & Done & Done \\\\ \\hline\n \\end{tabular}\n\\end{table}\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nWe have presented the JavaScript Open-source Library (JSOL), a grammar-of-graphics JavaScript library with the power to generate captivating plots without limitations. JSOL\\space is implemented in JavaScript, which enables it to be integrated into many other fields. We also demonstrated how it works internally by describing each layer's specification and role, together with the architectural design. We discussed how JSOL\\space adheres to Human-Computer Interaction (HCI) principles. 
Manipulating data is a crucial part of JSOL: the way data is loaded from various sources, how transformation (e.g., filtering and grouping) and statistical (e.g., mean and std) operations are applied, and the way scales are generated for mapping the data fed to the geometries layer give JSOL\\space an edge over other libraries. Another contribution of the library is the way it combines the low-level visualization layers with the compilation process. We also presented a comparative study between JSOL\\space and established libraries such as \\cite{2011-d3} and \\cite{ggplot2}, and a comprehensive comparison with \\cite{vega-lite}. Finally, we explained how to use JSOL\\space through numerous examples, such as the parallel plot in Cartesian and polar coordinates.\n\n\n\\section{The JSOL}\\label{sec:brief-insight-libn}\nJSOL\\space amalgamates the grammar of graphics with a state-of-the-art compilation process. Throughout this section, we cover how a simple scene is generated and constructed, how data is processed, and the design of the layers.\n\n\\subsection{Unit Specification}\nDoubtlessly, a scene must have a data variable; after all, that is what is going to become a graphic. There are also some customization parameters, such as transformations, geometries, properties, and a set of encodings. The transformation layer is responsible for applying filters and aggregations, after which the geometry layer visually encodes the incoming input.\n\n\\lstinline[style=customJS]{scene := (data, transformations, geometries, properties)}\n\nDefining \\textit{properties} is optional; however, it is significant when it comes to details; as a way of illustration, imagine the case where a user wants to declare the points' color in the scene or the type of the data variable (e.g., CSV). A \\textit{scale} is also necessary, as it determines how data attributes are mapped to traits of geometries. \\textit{Axes} enhance the readability of scales.\n\n\\lstinline[style=customJS]{properties := (geometries, data, functions, scale, axes, guide)}\n\n\\subsection{Layers of~JSOL}\nAs unit specifications grow more complex every day, the layers are built with simplicity in mind. Several parts of the kernel are the same as \\cite{2011-d3}'s; the other parts are modified to smooth the interaction between the user and the library. We discuss each layer's structure and role below.\n\n\\subsubsection{Data Layer}\nAs the layer's name states, its role is to read data from various sources, for example, flat files (e.g., CSV and TSV) and open-standard files (e.g., JSON). Data could be an array of arbitrary values, numbers, strings, or objects. After reading the data source, the layer stores it in a pre-defined data structure for easy referencing from other layers; the data structure also supports CRUD operations. 
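\nPutting the layers together before examining them one by one, a minimal end-to-end JSOL\\space specification might look as follows (a hypothetical sketch; the field names mirror the per-layer examples given throughout this section):\n\\begin{lstlisting}\n{\n  \"data\": [{\n    \"name\": \"cars\",\n    \"values\": \"cars.csv\",\n    \"format\": { \"type\": \"csv\" }\n  }],\n  \"scales\": [{\n    \"name\": \"xscale\",\n    \"type\": \"linear\",\n    \"range\": { \"type\": \"range\", \"value\": \"width\" },\n    \"domain\": { \"data\": \"cars\", \"field\": \"weight\" }\n  }, {\n    \"name\": \"yscale\",\n    \"type\": \"linear\",\n    \"range\": { \"type\": \"range\", \"value\": \"height\" },\n    \"domain\": { \"data\": \"cars\", \"field\": \"economy (mpg)\" }\n  }],\n  \"axes\": [\n    { \"type\": \"x\", \"scale\": \"xscale\" },\n    { \"type\": \"y\", \"scale\": \"yscale\" }\n  ],\n  \"geom\": [{\n    \"type\": \"Point\",\n    \"data\": \"cars\",\n    \"properties\": { \"x\": \"xscale\", \"y\": \"yscale\" }\n  }]\n}\n\\end{lstlisting}\n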
As shown in the example below, each dataset has a \\textit{name} to keep the loading of multiple datasets consistent; the \\textit{values} parameter is the source that contains the dataset, and the \\textit{format} parameter defines the source type.\n\n\n\\begin{lstlisting}\n\t\"data\": [{\n\t\t\"name\": \"troops\",\n\t\t\"values\": \"troops.csv\",\n\t\t\"format\": { \"type\": \"csv\" }\n\t\t}, {\n\t\t\"name\": \"cities\",\n\t\t\"values\": \"cities.csv\",\n\t\t\"format\": { \"type\": \"csv\" }\n\t}]\n\\end{lstlisting}\n\n\n\\subsubsection{Transformation Layer}\nThis layer's responsibility is to perform analytical operations on the data gathered by the data layer; it helps the user perform many transformations, including filtering and grouping. Executing these operations is necessary to optimize processing time. The layer expects its input to come from the data layer so that it operates on a valid dataset. It consists of two sub-layers taken from \\cite{Wilkinson}: \\textit{variables} and \\textit{algebra}.\n\n\\begin{lstlisting}\n\"transform\":[{\n \"lang\": \"R\",\n \"function\": \"fibonnaci\",\n \"properties\": {\n \"data\" : \"static\",\n \"length\" : 20,\n \"field\" : \"x\",\n \"name\": \"fibonnaci_x\"\n }\n}]\n\\end{lstlisting}\n\n\\paragraph{High-dimensional Spaces}\nLiving in a 3-D world restricts us from visualizing structures in high-dimensional spaces. The curse of dimensionality, as \\cite{bellman} called it, has long been an impediment with various proposed solutions; however, we only recall \\textit{nesting} here, as covering all solutions is out of our scope. Nesting is a way to circumvent the challenge, in which we represent any two dimensions of the data on the X-axis and Y-axis, while the points defined by these axes are themselves separate charts (e.g., pie, bar, image, and so on); in JSOL, we use a 9-block greyscale image in which each block's greyscale level equals the corresponding dimension's value \\textit{(see Fig.~\\ref{fig:nesting})}.\n\n\n\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{nesting_property_figure}\n\t\\caption{Nesting: each data point defined by the X and Y axes is drawn as a 9-block greyscale image whose blocks encode the remaining dimensions.}\n\t\\label{fig:nesting}\n\\end{figure}\n\\subsubsection{Scales Layer}\nA visual encoding is called a \\textit{scale}: to draw data in a scene, we need to map data values to their corresponding geometries. This layer is taken from \\cite{2011-d3} and supports both ordinal and quantitative (linear, logarithmic, exponential, quantile) values.\n\n\n\\begin{lstlisting}\n\"scales\": [{\n \"name\": \"yscale\",\n \"type\": \"linear\",\n \"range\": {\n \"type\": \"range\",\n \"value\": \"height\"\n },\n \"domain\": {\n \"data\": \"crimea\",\n \"field\": \"economy (mpg)\"\n }\n}]\n\\end{lstlisting}\n\n\\subsubsection{Statistics Layer}\nThe use of the statistics layer is optional, since applying statistical functions is not every user's central objective. It is managed using the R programming language, which gives us an edge over other libraries: R is a user-friendly language built specifically for statistical modeling and inference, making it easy to execute any statistical function on the data in a few lines of code. 
Moreover, it has a knowledgeable community and rich documentation.\n\n\\subsubsection{Geometry Layer}\nJSOL~implements the same \\texttt{d3.svg.shape} element provided by \\cite{2011-d3} on an HTML5 canvas element that is suitable for charting, providing the computational speed supported by \\textbf{WebGL}; the arc, for example, builds elliptical arcs such as pie (\\includegraphics*[height=\\fontcharht\\font`\\B]{basic_pie}), donut, and cox-comb (\\includegraphics*[height=\\fontcharht\\font`\\B]{cox-comb}) charts by transforming arbitrary data into paths. Typically, this function is bound to the arc attribute; note that the radius and angles of the arc can be specified either as constants or as callback functions. Additional shapes are provided for areas, lines, mark symbols, etc.\n\n\\begin{lstlisting}\n\"geom\" : [{\n \"type\" : \"Point\",\n \"data\" : \"static\",\n \"properties\" : {\n \"x\" : \"xscale\",\n \"y\" : \"yscale\",\n \"fillColor\" : \"zscale\"\n }\n}]\n\\end{lstlisting}\n\n\\subsubsection{Axes Layer}\nThis layer is a crucial step in the graphical scene, since it maps a scale to a meaningful form that is comprehensible to human eyes. Axes visualize spatial scale mappings using \\textit{ticks}, \\textit{grid lines}, and \\textit{labels}. JSOL\\space generates axes from a given scale and currently supports axes for Cartesian (rectangular) and polar coordinates.\n\n\\begin{lstlisting}\n\"axes\": [{\n \"type\": \"x\",\n \"data\": \"static\",\n \"field\": \"x\",\n \"orient\" : \"bottom\",\n \"grid\" : true\n}, \n{\n \"type\": \"y\",\n \"scale\": \"yscale\"\n}]\n\\end{lstlisting}\n\n\\subsubsection{Guide Layer}\nBoth \\textit{guides} and \\textit{axes} visualize scales, but guides aid interpretation of scales with ranges such as colors, shapes, and sizes, whereas axes aid interpretation of scales with spatial ranges. Similar to scales and axes, guides can be defined either as a top-level or a low-level visualization.\n\n\\begin{lstlisting}\n\"guides\": [{\n \"type\": \"legend\",\n \"domain\": { ... },\n \"properties\": {\n \"title\": { \n \"name\": \"key map\"\n },\n \"position\": { ... }\n }\n}]\n\\end{lstlisting}\n\n\\subsection{Compiler of JSOL} \nJSOL's compiler is purposefully built to turn the library from a low-level visualization library into a high-level one, making it effortless for users to express their parameters and build imaginative scenes. The compiler also has a predefined value for each parameter; consequently, the headache of passing every value is removed.\n\nThe user's specification passes through several phases to become a finished chart: \\textit{scanning}, which reads the specification and divides it into sections based on their roles; \\textit{parsing}, which prepares the low-level representation and fills in the missing parameters; \\textit{linking}, where the layers' objects are attached to each other; and finally \\textit{assembling}, where the full chart comes alive.\\\\\n\nThe first stage of the JSOL~compilation process is scanning, where the user's specification is parsed and validated. JSOL~provides many rules---for example, each scale type, such as the linear scale, is considered a rule that users and developers can apply---and each of these rules follows validation steps, so we can be certain that users conform to them and that the scene will be generated. 
The second stage is the connecting phase. After the user's specifications are validated, this phase is responsible for generating them, i.e., transforming the user's specifications into the layers' executable classes, functions, and APIs. These transformations require searching a huge combination tree of components, and some specifications may be present that are not required, so this phase also checks that each specification transformation is actually needed to build the scene; for example, the user might provide both a file path and a file URL for the same data source, while only one of them is necessary.\\

After the building phase, connections are required, and this is where the linking phase comes in. Its main responsibility is to connect layers to each other; for example, the axes and geometry layers require the scales layer, so JSOL~traverses all of the user's specifications, searches for each node's connections, and links each node to its requirements. Another responsibility is to check whether a connection is acceptable: not all layers can be connected to each other, so JSOL~must check whether each user-specified connection is allowed.\\

The last stage is the assemble phase, where the user finally sees the result of his or her work. The assemble phase takes all of the transformed specifications, now turned into layer functions and APIs, and executes them on the selected HTML5 canvas. This phase is similar to code generation and optimization: it puts each layer's functions in a queue to be executed in the proper order; for example, scales must be executed before axes, and color palettes must run before geometries, and so on.
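The following is a minimal sketch of that ordered execution; the layer list and the \texttt{draw(canvas)} method are hypothetical stand-ins for JSOL's internal functions.

\begin{lstlisting}
// Hypothetical sketch of the assemble phase; layer objects
// with a draw(canvas) method are illustrative, not JSOL's API.
const ORDER = ["data", "transform", "statistics",
               "scales", "axes", "geom", "guides"];

function assemble(layers, canvas) {
    // Queue the layers in dependency order: scales run
    // before axes, palettes before geometries, and so on.
    const queue = ORDER.filter((name) => name in layers)
                       .map((name) => layers[name]);
    for (const layer of queue) {
        layer.draw(canvas);
    }
}
\end{lstlisting}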
\subsection{JSOL\space from \textbf{H}uman \textbf{C}omputer \textbf{I}nteraction perspective}
\subsubsection{Discoverability}
From \cite{Norman}'s point of view, \textit{discoverability} is figuring out which actions are possible and how to perform them. Meanwhile, \cite{Nielsen} suggests that \textit{discoverability} means minimizing the user's memory load by making objects, actions, and options as visible as possible; instructions for the use of the system should be visible or easily retrievable. In JSOL, layers are understood by their names (e.g., the geometries layer clearly reads as being responsible for generating geometrical objects in a scene).

\subsubsection{Mapping}
\cite{Norman} holds that \textit{mapping} is a technical term for the relationship between two instances of things (data and its visual representation, in our case). On the other hand, \cite{Nielsen} says that a system exhibits good \textit{mapping} if it speaks the users' language, with words, phrases, and concepts familiar to the user rather than system-oriented terms, following real-world conventions and making information appear in a natural and logical order. As JSOL\space is a visualization library, its foremost concern is mapping data to graphics, and it uses the same words and phrases that other libraries apply, making understanding it, or switching from any other library, a soft touch.

\subsubsection{Affordance}
One way to make an interface both manageable and usable is to design it so that its very design informs users how to use it. \cite{Norman} defined \textit{affordance} as a relationship between the properties of an object and the capabilities of the agent that determine how the object could possibly be used. As illustrated earlier, JSOL\space uses keywords and functions that tell the user how it operates.

\subsubsection{Structure}
As \cite{Constantine} proposes, software employs the \textit{structure} principle if its user interface design is organized purposefully, in meaningful and useful ways based on precise, consistent models that are apparent and recognizable to users: putting related things together and separating unrelated things, differentiating dissimilar things, and making similar things resemble one another. JSOL's grammar-based components, which come from its internal layers, provide a structure by themselves that helps manage the overall generation of the graphical scene.

\subsubsection{Ease/Comfort}
Ease and comfort are two similar ideas that come from the principles of universal design. \cite{uni_design} defined \textit{ease} as using software efficiently, comfortably, and with a minimum of fatigue, and \textit{comfort} as presenting appropriate size and space for approach, reach, manipulation, and use, regardless of the user's body size, posture, or mobility. These two concepts are achieved in JSOL\space thanks to the compiler, as illustrated previously.

\subsubsection{Flexibility}
\cite{Nielsen} defines flexibility as speeding up the interaction for the expert user, so that the system can cater to both inexperienced and experienced users and allow users to tailor frequent actions. \cite{uni_design} frames it as whether or not the design accommodates a wide range of individual preferences and abilities. JSOL\space obeys both definitions, since it allows its users to generate the same plot in many ways, making it suitable for both novice and expert users from various domains.


\section{Introduction}\label{sec:introduction}
The most common visualization workflow, generally speaking and particularly for data visualization, is as simple as loading your dataset into a visualization library (or system), defining your charting information, and obtaining the chart you need. These steps can be implemented in a multitude of ways; as technology advances, most software attempts to keep up by supporting the most-used graphs and visualization scenes, and by offering user interfaces that make it easier to construct graphs with a few clicks. This methodology has the advantage of quick and easy use, but the disadvantage is that such software is restrictive and does not allow internal customization and modification. The alternative is to use a charting system such as vector drawing, which permits the user to customize every single piece of the graph; the appeal here is that the user can build the chart freely, without any restrictions.
However, such systems are troublesome for most users and consume a considerable amount of time to build a full scene; thus, the need arises for a visualization library that combines the merits of both approaches.

\subsection{Grammar of Graphics and Other Systems} 
The Grammar of Graphics (GoG) is a set of rules put together to create a scene that expresses the data. It divides grammars into two categories. \textit{Low-level} grammars are used to customize each piece of a visualization scene and are mainly used for exploratory data analysis, as their primitives offer fine-grained control, like the scenes used in analysis tools; examples include \cite{2011-d3} and \cite{Bostock2009Protovis}. \textit{High-level} grammars, on the other hand, are used to build traditional visual plots expeditiously, which is useful to users and analysts for rapid development; examples include \cite{ggplot2}, \cite{vega-lite}, and grammar-based systems such as \cite{tableau}. High-level grammars are more prevalent, as users prefer conciseness over expressiveness. Furthermore, in contrast to low-level grammars, high-level grammars use default values to resolve visualization ambiguities, which makes development comfortable for analysts and developers. Finally, some libraries, such as \textbf{Protovis}, sit in the middle; they try to combine ease of use with a non-restrictive methodology, creating a middle level that combines the advantages of both low-level and high-level grammars.\\
Choosing between a low-level and a high-level grammar is not easy; the user should take several concepts into consideration. The first is time consumption, i.e., how long it takes to build the scene, which is called \textit{efficiency}. The second is whether the user has the knowledge to build the scene, or whether the material exists for the user to learn the system or library, which is called \textit{accessibility}. The last is whether the user can build the desired scene at all, which is \textit{expressiveness}. Every user should weigh these concepts when generating plots; choosing between the levels can be confusing, and this is the main motivation behind middle-level and hybrid libraries.


\subsection{Why Another Library?}\label{sec:why-another-library}
JSOL~belongs to the high-level grammars because of the compiler it provides, which passes built-in default values to the low-level layers when they are not specified by the user. As a result, rapid development is accomplished. Moreover, JSOL's layers are a true realization of low-level grammars, offering customization of each visualization component.

\bigskip

JSOL's layers are built in the \textit{JavaScript} scripting language, and its compiler uses \textit{JSON} (JavaScript Object Notation) to interpret the user's specifications. Consequently, it works and runs entirely in web browsers that support JavaScript. Specifically, JSOL\space runs in \textit{HTML5 canvas} elements, which allow for high performance.
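A page embedding JSOL could then look roughly as follows; the file name \texttt{jsol.js} and the \texttt{jsol.render} entry point are hypothetical and shown only to illustrate the setup.

\begin{lstlisting}
<!-- Hypothetical embedding: jsol.js and jsol.render()
     are illustrative names, not a documented API. -->
<canvas id="scene"></canvas>
<script src="jsol.js"></script>
<script>
    // A JSON specification as described in this paper.
    const spec = {
        "data": [ /* ... */ ],
        "scales": [ /* ... */ ],
        "axes": [ /* ... */ ],
        "geom": [ /* ... */ ]
    };
    jsol.render(spec, document.getElementById("scene"));
</script>
\end{lstlisting}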
Users may think that debugging will be problematic; however, that is not the case, because JSOL's compiler has an inherent error-handling layer that provides the user with a proper debugger and informative error/warning messages.

There are some objectives that no visualization library can afford to miss:
\begin{itemize}
 \item\textbf{Performance}: high-level abstraction may limit the user's ability to generate fast
 visualization scenes. However, building charts on top of HTML5 canvas elements allows
 JSOL\space to use \textit{GPU} acceleration to speed up the process, even though we thereby
 shift the responsibility for data representation and transformation to the user and no longer
 treat it as our concern.
 \item\textbf{Debugging}: trial and error is a fundamental part of the development and learning
 processes; accessible tools must be designed to support debugging when errors occur. As
 JSOL\space is built in JavaScript, it allows users to use various debugging
 tools, in addition to the built-in test/debug layer that enhances the package's efficiency.
\end{itemize}

\bigskip

JSOL~comprises low-level layers, called \textit{modules} or \textit{kernels}, each made specially to carry the full burden of a given task. Visualization primitives, alternatively named \textit{marks} in \cite{vega-lite} or shapes in \cite{2011-d3}, provide geometries for charting, such as a bar, a point, or an arc. Another layer is the data-encoding layer, also called \textit{scale}, which maps data points to encoded pixels to bring data to life; to use a particular scale, you need to generate a reference to it via the \textit{axes} layer, which allows creating \textit{Cartesian} or \textit{polar} coordinate charts and plots. There is also a layer for data munging, called \textit{algebra}, made for operations on data such as \textit{join} (left and right), \textit{cross}, and \textit{nesting}, which allows visualizing higher dimensions, as we will see later in the examples section. Over and above these, there is a statistics layer for summary statistical operations on data, such as the \textit{mean}, \textit{std}, and so on. An expansive and comprehensive description of the library is given in Sec.~\ref{sec:brief-insight-libn}.

\bigskip

The JSOL~compiler synthesizes and combines all the low-level rules and specifications gathered from the other layers with respect to the given data, and validates the user's rules through all of JSOL's layers using the handler that manages default values for visualization primitives. In a wide range of examples, we will show how the compiler takes tremendous advantage of the lower-level encoding and visualization layers of the library to bring a high-level specification to visualizations, and we will later demonstrate how the compiler works and what minimum values must be given to produce visual plots.
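For instance, a near-minimal specification might name only a data source, a scale domain, an axis, and a geometry, leaving ranges, palettes, and all remaining parameters to the compiler's defaults (exactly which parameters are defaulted is assumed here for illustration):

\begin{lstlisting}
{
    "data": [{
        "name": "cities",
        "values": "cities.csv",
        "format": { "type": "csv" }
    }],
    "scales": [{
        "name": "xscale",
        "type": "linear",
        "domain": { "data": "cities", "field": "population" }
    }],
    "axes": [{ "type": "x", "scale": "xscale" }],
    "geom": [{
        "type": "Point",
        "data": "cities",
        "properties": { "x": "xscale" }
    }]
}
\end{lstlisting}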
\bigskip

On the one hand, visualization systems can be considered a subclass of graphical systems; they are the entities responsible for producing and processing graphical representations of data, and for the interaction needed to gain insight into the data. On the other hand, graphical systems are used to generate drawings in general, offering the utmost flexibility, and they come in different types (discussed in Sec.~\ref{sec:related-work}). Nevertheless, they were primarily not tailored for visualization purposes.


\section{Examples}\label{sec:exampl-results-disc}
\subsection{Parallel Coordinates Example}
\begin{figure}[tbh]
 \centering
 \includegraphics[width=0.5\textwidth]{parallel_coordinates}
 \caption{Caption place holder}\label{fig:parallel_cooridnates}
\end{figure}
Figure~\ref{fig:parallel_cooridnates} presents the benchmarking results, as illustrated in the previous section, for all JSOL\space visualizations: the time from page loading to rendering the scene graph is typically shorter than for most libraries, mainly because the \textit{HTML5 canvas} is built on top of WebGL, which uses the graphics card as a computational booster.

\bigskip

First and foremost, we have to load the data variable, of CSV type in this situation (for JSON, specify the type as \textit{"json"}). We can import from multiple data sources and execute operations on them using the \textit{transformation} and \textit{statistics} layers.
\begin{lstlisting}
"data": [{
    "name": "crimea",
    "values": "crimea-parallel.csv",
    "format": {
        "type": "csv"
    }
}]
\end{lstlisting}
After data loading finishes, the \textit{transformation} phase begins; since we do not need it in this situation, we specify it to be empty.
\begin{lstlisting}
"transform": []
\end{lstlisting}

We then move on to setting the \textit{scales} used by the axes. In this example we use the \textit{linear} scale, since the data does not need special scaling; we can use as many scales as we want. The JSOL scale definition (left) is shown next to the equivalent Vega-Lite encoding (right) for comparison.
\begin{minipage}[t]{0.25\textwidth}
\begin{lstlisting}
"scales": {
    "name": "xscale",
    "type": "linear",
    "range": {
        "type": "range",
        "value": [0, 330]
    },
    "domain": {
        "data": "crimea",
        "field": "name"
    }
}
\end{lstlisting}
\end{minipage}%
\begin{minipage}[t]{0.25\textwidth}
\begin{lstlisting}
"encoding": {
    "x": {
        "field": "wavelength",
        "type": "quantitative",
        "scale": {
            "domain": [300,450]
        }
    },
    "y": {
        "field": "power",
        "type": "quantitative"
    }
}
\end{lstlisting}
\end{minipage}
Defining \textit{axes} follows placing scales. An axis must have some characteristics, such as:
\begin{itemize}
 \item \texttt{Type:} the axis type; the one in the snippet below is implemented specifically for polar parallel coordinates.
Many other axis types are available.
 \item \texttt{Properties:} define the aesthetics of the axis; the annotation sets the axis title text, position, and color.
 \item \texttt{Transform:} sets the transformation of the axis; in our case, the axis uses the \textit{power function} transformation.
\end{itemize}
The comparison code snippets show the difference between JSOL~and \textbf{Vega-Lite}: in Vega-Lite, each variable either defines its own scale or uses the default scale for its type (quantitative in this example), whereas in JSOL~you only need to define a scale once and invoke it by name on as many variables/axes as you want; here you can sense the power of this dynamicity.

\begin{minipage}{0.3\textwidth}
 \lstset{framexrightmargin=-2pt, basicstyle=\footnotesize}
\begin{lstlisting}{}
"axes": {
 "type": "coord_polar_parallel",
 "properties": [{
 "type": "y",
 "data": "crimea",
 "field": "economy-mpg",
 "orient": "left",
 "grid": false,
 "text": {
 "font": "10px tohma",
 "colour": "blue"
 },
 "annotation": {
 "title": "economy-mpg-",
 "position":
 "edge",
 "font": "10px Arial",
 "colour": " blue"
 },
 "transform": {
 "function": "pow",
 "properties": {
 "power": 2,
 "name": "power"
 }
 }
 }]
}
\end{lstlisting}
\end{minipage}%
\begin{minipage}{0.20\textwidth}
 \begin{figure}[H]
 \includegraphics[width=\textwidth]{enhanced_parallel_coord}
 \end{figure}
 \begin{figure}[H]
 \includegraphics[width=\textwidth]{arc_polar_coord}
 \end{figure}
\end{minipage}
The figures above reveal the power of using JSOL: both show \cite{parallel_coord}'s parallel coordinates, but one of them is polar, which is a compelling point of the library: you can easily convert any visualization scene from Cartesian to polar coordinates in a single command.


\section*{References}

\section{\label{sec:level1}INTRODUCTION}
Electrons in materials with open d or f shells experience electronic correlations that can significantly affect the electronic structure and give rise to a variety of exotic properties, such as electronic topology\cite{PhysRevB.104.235108,PhysRevB.91.125139,PhysRevLett.121.066402,PhysRevLett.122.016803,PhysRevResearch.3.013265}, magnetism\cite{PhysRevB.97.184404, PhysRevB.95.075124,2021Electron}, and metal-insulator transitions \cite{PhysRevLett.116.116403,PhysRevB.98.075117,PhysRevB.104.035108}. Generally, electron correlation effects are more pronounced in two-dimensional (2D) systems than in three-dimensional (3D) systems with the same chemical composition, because Coulomb screening, which suppresses the long-range Coulomb interaction between electrons, is inhibited by the reduced dimensionality of 2D systems. Therefore, it is easier to tune the electronic correlation strength of 2D systems experimentally, e.g., through the use of structured substrates \cite{2017Coulomb,2019A,PhysRevLett.123.206403} or the growth of multilayer films with different thicknesses\cite{2021Anomalous}. On the other hand, the properties accessible through Coulomb engineering have been theoretically revealed in 2D materials\cite{PhysRevB.105.115115,PhysRevB.104.085149,PhysRevB.98.075117,PhysRevB.104.035108}.
For example, a coexisting quantum anomalous Hall insulating state in the VSi$_2$P$_4$ monolayer has been proposed by tuning the Hubbard \textit{U} constant in first-principles calculations \cite{PhysRevB.104.085149}, and by varying the Coulomb repulsion \textit{U}, the location of the metal-insulator transition as well as the magnetism and superconductivity of several 2D systems have been revealed \cite{PhysRevB.98.075117}. Thus, 2D materials provide a good playground for investigating numerous electronic correlation effects.

Recently, a new class of 2D van der Waals systems, single-layer (SL) H-FeX$_2$ (X=Cl, Br, I) \cite{2020The,hu2020concepts,ZHAO202256}, has attracted much attention. It has been reported that SL H-FeCl$_2$ belongs to the ferrovalley (FV) materials \cite{2016Concepts}, with spontaneous valley polarization coupled to its ferromagnetism.
 Theoretical calculations have shown that the orientation of the easy magnetization axis of SL H-FeClF rotates from out-of-plane to in-plane \cite{PhysRevB.105.104416} as the strength of the electronic correlation (\textit{U$_{\textup{eff}}$}) increases. In addition, the electronic energy bands of SL H-FeClF exhibit a noticeable evolution with \textit{U$_{\textup{eff}}$}, giving rise to topological phase transitions \cite{PhysRevB.105.104416}. Intuitively, varying the strength of the electronic correlations alters the local distribution of electronic valence charge on the open d shell of the Fe ions and changes the local magnetism and the strength of the spin-orbit coupling (SOC). This, of course, drives the evolution of the atomic-scale magnetic anisotropy (MA) of the system. As we know, the atomic-scale MA is not only tightly related to the upper performance limit of the system as magnetic memory, but is also connected to the polarization of valleys in FV materials through the magneto-valley coupling \cite{PhysRevB.102.035412, PhysRevB.104.085149}. It is therefore necessary to study the MA of SL H-FeX$_2$ (X=Cl, Br, I). Unfortunately, the evolution of the MA with \textit{U$_{\textup{eff}}$} for the members of the SL H-FeX$_2$ (X=Cl, Br, I) family is unknown. In addition, since the modulation of both the electronic structure and the magnetic properties in each member of the SL H-FeX$_2$ (X=Cl, Br, I) family is driven by the Coulomb correlation on Fe, the MA of the family members is likely to share common traits. On the other hand, the different members of SL H-FeX$_2$ (X=Cl, Br, I) contain different halogen elements, and these should, to some extent, alter the modulation of the electronic and magnetic properties of the system in different manners. The study of these questions is clearly of great importance for both the scientific interest and the technological relevance of spintronics.
 
In this work, we investigated the effect of the electronic correlation strength on the electronic structure as well as the magnetic anisotropy energy (MAE), represented by SL H-FeBr$_2$. Based on first-principles calculations with the DFT+\textit{U} approach, we found that the MAE evolves non-monotonically with increasing \textit{U$_{\textup{eff}}$} imposed on Fe, corresponding to a flip between in-plane and out-of-plane magnetization. We suggest that this non-monotonic evolution of the MAE is attributable to the competition between the element-resolved MAE contributions of the Fe and Br atoms.
The above-mentioned behavior of the MAE with \textit{U$_{\textup{eff}}$} and its underlying mechanism are universal for the SL H-FeX$_2$ (X=Cl, Br, I) family. In addition, band inversion occurs in SL H-FeBr$_2$ as \textit{U$_{\textup{eff}}$} rises, leading to the occurrence of topological phase transitions and quantum anomalous valley Hall (QAVH) states. Our work highlights the correlation effects on the MAE and the topological phase transition in the H-FeX$_2$ (X=Cl, Br, I) family.


\section{\label{sec:level1}METHODS}
We performed first-principles density functional theory (DFT) calculations implemented in the Vienna \textit{ab initio} Simulation Package (VASP) \cite{PhysRevB.54.11169} to study all our concerns in the present work. In our theoretical treatment, the plane-wave cutoff energy was set to 600 eV and the Brillouin zone was sampled with a $12\times 12\times 1$ $\Gamma$-centered $k$-point mesh. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) realization was used for the exchange-correlation functional \cite{PhysRevLett.77.3865}. 
To treat the effect of correlations between electrons in the 3d orbitals of the Fe atoms, Dudarev's approach to the Coulomb correction implemented in the DFT+\textit{U} scheme was applied to the Fe ions in the system, where only \textit{U$_{\textup{eff}}$} =$\textit{U}-\textit{J}$ is meaningful \cite{PhysRevB.57.1505}. In addition, the vdW corrections included in the DFT-D3 method were considered \cite{2010A}.
During the structural optimization, both the atomic positions and the lattice constants of each system were fully relaxed until the Hellmann-Feynman forces acting on each atom were less than $10^{-3}$ eV/\AA; the electronic convergence criterion was set to $10^{-6}$ eV. Since the system of interest is a two-dimensional nanosheet, a 20 \AA\ vacuum region was added along the direction perpendicular to the surface of the nanosheet to avoid interactions between the periodic images. 
To calculate the Berry curvature, maximally localized Wannier functions (MLWFs) were constructed using the WANNIER90 package \cite{MOSTOFI20142309}. The edge states were calculated using the iterative Green function method implemented in the WannierTools package \cite{2017WannierTools}.

The MAE is defined as the total energy difference between the in-plane ferromagnetic (FM) configuration (\textit{E$_{\textup{x}}$}) and the out-of-plane FM configuration (\textit{E$_{\textup{z}}$}), namely 
\begin{equation}
\textup{MAE} = \textit{E$_{\textup{x}}$}-\textit{E$_{\textup{z}}$}. 
\end{equation}

Here, a positive (negative) value of the MAE indicates that the easy magnetization axis is oriented out-of-plane (in-plane). Moreover, the element- and orbital-resolved MAE were calculated from the difference of the SOC energies between the in-plane and out-of-plane ferromagnetic configurations, i.e., \cite{PhysRevB.101.214436}
\begin{equation}
\Delta{E}_{\textup{SOC}} = E^{x}_{\textup{SOC}}- E^{z}_{\textup{SOC}},
\end{equation}
with 
\begin{equation}
E_{\textup{SOC}} = \langle \frac{\hbar^{2}}{2m^{2}c^{2}} \frac{1}{r} \frac{dV}{dr} \hat{L} \cdot \hat{S} \rangle,
\end{equation}
where $V(r)$ is the spherical part of the effective potential inside the PAW sphere, while $\hat{L}$ and $\hat{S}$ are the orbital and spin angular momentum operators, respectively. According to second-order perturbation theory, only about 50\% of the SOC energy difference contributes to the MAE, i.e., $\textup{MAE} \approx \frac{1}{2} \Delta{E}_{\textup{SOC}}$ \cite{ANTROPOV201435,PhysRevB.96.014435}, while the remaining SOC energy is transferred into the crystal-field energy and unquenched orbital moments \cite{2011Is}.
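As a concrete illustration of the sign convention of the MAE defined above (the number is hypothetical), a calculation yielding
\begin{equation*}
\textup{MAE} = \textit{E$_{\textup{x}}$}-\textit{E$_{\textup{z}}$} = +0.5~\textup{meV per unit cell}
\end{equation*}
would mean that the out-of-plane configuration is lower in energy, i.e., a perpendicular easy axis, whereas a negative value would indicate in-plane magnetization.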
\section{\label{sec:level1}RESULTS AND DISCUSSION}
\subsection{\label{sec:level2}Basic structural and magnetic properties}

\begin{figure}[ht]
\includegraphics[scale=0.27]{1.png}
\caption{
(a) Top and (b) side views of the atomic structure of SL H-FeBr$_2$. (c) The splitting of the 3d orbitals of the Fe atom under the trigonal prismatic crystal field; the trigonal prismatic crystal structure is also shown. (d) The Brillouin zone with high-symmetry points labeled.
 }
\label{fig1}
\end{figure}

Single-layer (SL) H-FeBr$_2$ has a hexagonal structure with the space group \textit{P$\overline{6}$m}2 (No. 187). A monolayer of Fe atoms is sandwiched between two adjacent Br monolayers, and each Fe atom is surrounded by six Br atoms, forming a local FeBr$_6$ trigonal prism, as shown in Figs. \ref{fig1}(a) and (b). Our calculations show that the lattice constant of this system is \textit{a} = 3.57 \AA\ and the Fe-Br-Fe bond angle is $\theta=85.3^{\circ}$, in excellent agreement with the literature \cite{2020The}. In particular, the six Br ions in each FeBr$_6$ trigonal prism provide a local crystal field at each Fe atom (see Fig. \ref{fig1}(c)). This causes the 3d orbitals of the Fe atom to split into three groups in the energy landscape: the $d_{z^{2}}$ orbital (denoted A$_1$), the degenerate $(d_{xy}, d_{x^{2}-y^{2}})$ orbitals (denoted E$_1$), and the $(d_{xz}, d_{yz})$ orbitals (denoted E$_2$), as schematically displayed in Fig. \ref{fig1}(c). Since the electronic configuration of the Fe$^{2+}$ ion is 3d$^{6}$, the spin-up channels of the 3d orbitals are fully occupied. In the spin-down channel, only the $d_{z^{2}}$ orbital is occupied by an electron, and the other d orbitals are empty. This gives rise to a spin magnetic moment of 4$\mu_{B}$ at each Fe$^{2+}$ ion. 
 
 \begin{figure}[ht]
\includegraphics[scale=0.4]{2.png}
\caption{
(a) Schematic top views of the FM, stripy-AFM, and zigzag-AFM magnetic configurations. The solid and open circles represent spin-up and spin-down states, respectively. 
(b) The evolution with \textit{U$_{\textup{eff}}$} of the energy difference between the FM and stripy-AFM (zigzag-AFM) states, defined as $\Delta E = E_{\textup{Stripy/Zigzag}}-E_{\textup{FM}}$.
 }
\label{fig2}
\end{figure}

To investigate the magnetic ground state of SL H-FeBr$_2$, three types of magnetic configurations were considered: an FM configuration, a stripy antiferromagnetic (AFM) configuration, and a zigzag AFM configuration (see Fig. \ref{fig2}(a)). Performing calculations for each of these magnetic configurations with \textit{U$_{\textup{eff}}$} imposed on Fe and varied from 0.0 to 1.8 eV, we found that the energy of the system with either the stripy AFM or the zigzag AFM configuration is significantly higher than that with the FM configuration (Fig. \ref{fig2}(b)), in good agreement with the previous study \cite{2020The}.


The origin of the above FM state can be understood in terms of the super-exchange interaction.
Note that the Fe-Br-Fe bond angle ($85.3^{\circ}$) in SL H-FeBr$_2$ is close to $90^{\circ}$.
Combining this structural feature with the Goodenough-Kanamori-Anderson (GKA) rules \cite{PhysRev.100.564,KANAMORI195987}, we know that the super-exchange interaction between two nearest-neighbouring Fe atoms is predominantly FM. In addition to this FM super-exchange interaction, there is also a weak direct AFM exchange between the two nearest-neighbouring Fe ions. However, its strength is usually weaker than that of the FM super-exchange interaction, owing to the localization of the d orbitals on each magnetic atom.
Hence, the FM super-exchange interaction dominates the coupling between two nearest-neighbouring Fe ions in this nanosheet when \textit{U$_{\textup{eff}}$} is small. 


Another aspect shown in Fig. \ref{fig2}(b) is that the energy difference between the stripy AFM and FM configurations, or between the zigzag AFM and FM configurations (defined as $\Delta E = E_{\textup{Stripy/Zigzag}}-E_{\textup{FM}}$), decreases with increasing \textit{U$_{\textup{eff}}$}, showing a weakening of the FM coupling. When \textit{U$_{\textup{eff}}$} reaches 2.0 eV, the zigzag AFM configuration has a lower energy than the FM configuration, leading to an AFM ground state. 


In fact, this FM-AFM transition can be understood as a consequence of the competition between the indirect FM superexchange and the direct AFM exchange. As we know, the indirect FM superexchange strength is proportional to 
 $-\frac{ t^{4}_{pd} J^{p}_{H} }{(\Delta_{pd}+U_{d} )^{4}} $, 
 with $t_{pd}$, $J^{p}_{H}$, $\Delta_{pd}$, and $U_{d}$ 
representing the hybridization strength between the Fe d orbitals and Br p orbitals, the Hund's coupling strength of the Br p orbitals, the energy interval between the Fe d orbitals and Br p orbitals, and the spin-splitting energy of the Fe d orbitals, respectively \cite{PhysRevB.105.085129}. Meanwhile, the strength of the direct AFM exchange is proportional to 
$\frac{t_{dd}}{U_{d}}$, 
with $t_{dd}$ being the strength of the direct hybridization between the d orbitals of nearest-neighbouring Fe ions. 

With the increase of \textit{U$_{\textup{eff}}$}, the spin splitting $U_{d}$ increases significantly. From the above formulas, the indirect FM superexchange strength is inversely proportional to the fourth power of $U_{d}$, while the direct AFM exchange strength is only inversely proportional to the first power of $U_{d}$. Obviously, increasing $U_{d}$ leads to a much more pronounced decrease in the strength of the indirect FM superexchange than in that of the direct AFM exchange. As a result, the energy difference between the AFM and FM states decreases with \textit{U$_{\textup{eff}}$}, giving rise to a transition between the FM and AFM magnetic configurations.
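To make this competition explicit, the ratio of the two couplings can be estimated schematically (assuming $\Delta_{pd}\ll U_{d}$, so that the superexchange denominator is dominated by $U_{d}$):
\begin{equation*}
\frac{\left| J_{\textup{FM}} \right|}{J_{\textup{AFM}}} \propto \frac{t^{4}_{pd} J^{p}_{H}/U_{d}^{4}}{t_{dd}/U_{d}} \propto \frac{1}{U_{d}^{3}},
\end{equation*}
so that, for example, doubling $U_{d}$ suppresses the FM superexchange by a factor of about $2^{4}=16$ but the direct AFM exchange only by a factor of 2; the AFM term therefore eventually prevails.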
We emphasize that although the energy of the stripy AFM state is slightly lower than that of the FM state (by about 0.007 eV/Fe) when \textit{U$_{\textup{eff}}$} = 2 eV, the FM state is the most energetically favourable one over most of the considered range of \textit{U$_{\textup{eff}}$} (from 0.0 eV to 2.0 eV). Thus, in our following calculations, only the FM state of the system is investigated.

\subsection{\label{sec:level2}Electronic correlation effects on MAE}


\begin{figure}[t]
\includegraphics[scale=0.27]{3.png}
\caption{
The evolution of (a) the total MAE and (b) the element-resolved MAE in a unit cell with \textit{U$_{\textup{eff}}$}.
 }
\label{fig4}
\end{figure}
 We now turn to the evolution of the MAE with the correlation strength on Fe, represented by \textit{U$_{\textup{eff}}$}. 
Figure \ref{fig4} displays the calculated MAE as a function of \textit{U$_{\textup{eff}}$}. As shown in Fig. \ref{fig4}(a), the MAE increases with \textit{U$_{\textup{eff}}$} until \textit{U$_{\textup{eff}}$} reaches 0.8 eV, followed by a rapid decrease. Therefore, with increasing \textit{U$_{\textup{eff}}$}, the MAE varies from negative values to positive values and then back to negative values. This corresponds to the magnetic state changing from the in-plane FM state to the out-of-plane FM state before returning to the in-plane FM state. The system prefers an in-plane FM state if \textit{U$_{\textup{eff}}$} \textless\ 0.1 eV or \textit{U$_{\textup{eff}}$} \textgreater\ 1.1 eV; otherwise, the system shows an out-of-plane FM state. Thus, there exists a perpendicular magnetic anisotropy (PMA) in the range 0.1 eV \textless\ \textit{U$_{\textup{eff}}$} \textless\ 1.1 eV. 
 
Basically, the total MAE of a system is built up from the contributions of its atoms. Thus, the element-resolved MAE as a function of \textit{U$_{\textup{eff}}$} was calculated, as shown in Fig. \ref{fig4}(b). Remarkably, the MAE contributions from the Fe and Br elements show completely different behaviors with increasing \textit{U$_{\textup{eff}}$}: the MAE of the Br element (denoted Br-MAE) increases monotonically with \textit{U$_{\textup{eff}}$}, whereas the MAE of Fe (denoted Fe-MAE) remains almost constant until \textit{U$_{\textup{eff}}$} reaches 0.8 eV and then, for \textit{U$_{\textup{eff}}$} \textgreater\ 0.8 eV, decreases sharply, changing from positive to negative values. 
Apparently, the non-monotonic behaviour of the total MAE in a unit cell results from the competition between the element-resolved MAE of Fe and Br. For \textit{U$_{\textup{eff}}$} \textless\ 0.8 eV, the enhancement of the Br-MAE dominates, which is responsible for the increase in the total MAE. For \textit{U$_{\textup{eff}}$} \textgreater\ 0.8 eV, the faster change of the Fe-MAE dominates and gives rise to a decrease in the total MAE. 

\begin{figure*}[ht]
\includegraphics[scale=0.5]{4.pdf}
\caption{
(a) The difference of the orbital-resolved MAE of each Br atom between \textit{U$_{\textup{eff}}$}=0.0 eV and \textit{U$_{\textup{eff}}$}=2.0 eV, defined as $\Delta\textup{MAE} = {\textup{MAE}}_{\textit{U$_{\textup{eff}}$}= 2 eV}-{\textup{MAE}}_{\textit{U$_{\textup{eff}}$}= 0 eV}$. The spin-polarized DOS of the Br-$p_{x}$ and Br-$p_{y}$ orbitals at (b) \textit{U$_{\textup{eff}}$}=0.0 eV and (c) \textit{U$_{\textup{eff}}$}=2.0 eV is given. 
(d) The difference of the orbital-resolved MAE of each Fe atom between \textit{U$_{\textup{eff}}$}=0.8 eV and \textit{U$_{\textup{eff}}$}=2.0 eV, defined as $\Delta\textup{MAE} = {\textup{MAE}}_{\textit{U$_{\textup{eff}}$}=2 eV}-{\textup{MAE}}_{\textit{U$_{\textup{eff}}$}=0.8 eV}$.
The spin-polarized DOS of the Fe-$d_{xy}$ and Fe-$d_{x^{2}-y^{2}}$ orbitals at (e) \textit{U$_{\textup{eff}}$}=0.8 eV and (f) \textit{U$_{\textup{eff}}$}=2.0 eV is given. 
}
\label{fig5}
\end{figure*}
 
Why does the evolution of the Br-MAE behave so differently from that of the Fe-MAE as \textit{U$_{\textup{eff}}$} changes? To uncover the underlying physics, we recall that, based on second-order perturbation theory, the MAE essentially derives from the SOC interaction between the occupied and unoccupied states around the Fermi level, which is expressed as \cite{PhysRevB.47.14932,PhysRevB.103.195402}
\begin{equation}
\begin{aligned}
& \textup{MAE} \\
 & = \sum_{\sigma,\sigma'}(2\delta_{\sigma,\sigma'}-1)\xi^{2}\sum_{o^{\sigma},u^{\sigma'}} \frac{|\langle o^{\sigma} |\hat{L}_{z} |u^{\sigma'} \rangle |^{2} - |\langle o^{\sigma} |\hat{L}_{x} |u^{\sigma'} \rangle |^{2}}{E^{\sigma'}_{u}-E^{\sigma}_{o}}. 
\end{aligned}
\end{equation}
Here $\xi$ is the SOC strength, $\sigma$ and $\sigma'$ are spin indices, and $\hat{L}_{z}$ and $\hat{L}_{x}$ are angular momentum operators. $|o^{\sigma} \rangle $ ($|u^{\sigma'} \rangle $) is an occupied (unoccupied) state with spin $\sigma$ ($\sigma'$) and energy $E^{\sigma}_{o}$ ($E^{\sigma'}_{u}$). According to this expression, the MAE is sensitive to the energy interval between the occupied and unoccupied states, $ E^{\sigma'}_{u}-E^{\sigma}_{o}$. Thus, tuning \textit{U$_{\textup{eff}}$} changes $ E^{\sigma'}_{u}-E^{\sigma}_{o}$ and thereby the MAE. 


Furthermore, we computed the orbital-resolved MAE of each Br atom at \textit{U$_{\textup{eff}}$} = 0 eV and 2 eV, and of each Fe atom at \textit{U$_{\textup{eff}}$} = 0.8 eV and 2 eV (as shown in Fig. S1 in the Supplemental Material \cite{SM}), and then took the differences $\Delta \textup{MAE} = {\textup{MAE}}_{ \textit{U$_{\textup{eff}}$}= 2 eV}-{\textup{MAE}}_{ \textit{U$_{\textup{eff}}$}= 0 eV}$ for Br and $\Delta \textup{MAE} = {\textup{MAE}}_{\textit{U$_{\textup{eff}}$}=2 eV}-{\textup{MAE}}_{\textit{U$_{\textup{eff}}$}=0.8 eV}$ for Fe, as displayed in Figs. \ref{fig5}(a) and \ref{fig5}(d). 

In the case of Br, Fig. \ref{fig5}(a) shows that as \textit{U$_{\textup{eff}}$} increases from 0.0 eV to 2.0 eV, the $p_{x}-p_{y}$ orbital pair makes a more positive contribution to the Br-MAE, while the other orbital pairs contribute only marginally; thus only the Br $p_{x}$ and $p_{y}$ orbitals are considered. To understand this observation, we calculated the spin-polarized density of states (DOS) of the Br-$p_{x}$ and Br-$p_{y}$ orbitals in SL H-FeBr$_{2}$ at \textit{U$_{\textup{eff}}$} = 0.0 eV and \textit{U$_{\textup{eff}}$} = 2.0 eV, shown in Figs. \ref{fig5}(b) and \ref{fig5}(c). It can be observed that the $p_{x}$ and $p_{y}$ orbitals of Br are degenerate.
Furthermore, the unoccupied Br states near the Fermi level are the spin-down $p_{x/y}$ states, and the occupied Br states near the Fermi level are the spin-up $p_{x/y}$ states.

We first investigate the MAE contributed by the SOC interaction between the unoccupied spin-down Br-$p_{y}$ states (denoted $|p^{\downarrow}_{y,u} \rangle$) and the occupied spin-up Br-$p_{x}$ states (denoted $|p^{\uparrow}_{x,o}\rangle$), which can be expressed as

\begin{equation}
\begin{aligned}
 \textup{MAE}_{\textup{Br-$p_{x/y}$}} & = -\xi ^{2} \sum_{o,u} \frac{|\langle p^{\uparrow}_{x,o}|\hat{L_{z}}| p^{\downarrow}_{y,u} \rangle |^{2} -|\langle p^{\uparrow}_{x,o}|\hat{L_{x}}| p^{\downarrow}_{y,u} \rangle |^{2}}{E^{\downarrow}_{p_{y,u}} - E^{\uparrow}_{p_{x,o}} }.\\
\end{aligned}
\end{equation}

Noting that the matrix elements are \cite{doi:10.1021/acs.inorgchem.9b00687}
\begin{equation}
 \langle p_{x} |\hat{L_{z}} | p_{y} \rangle = i, 
\end{equation}
and
\begin{equation}
 \langle p_{x} |\hat{L_{x}} | p_{y} \rangle = 0,
\end{equation}
 the $\textup{MAE}_{\textup{Br-$p_{x/y}$}}$ can be simplified to
 \begin{equation}
 \begin{aligned}
 \textup{MAE}_{\textup{Br-$p_{x/y}$}} = -\xi ^{2} \sum_{o,u} \frac{1}{E^{\downarrow}_{p_{y,u}} - E^{\uparrow}_{p_{x,o}}}.
 \end{aligned}
 \end{equation}
 
Apparently, $\textup{MAE}_{\textup{Br-$p_{x/y}$}}$ contributes a negative value to the Br-MAE. Meanwhile, the amplitude of $\textup{MAE}_{\textup{Br-$p_{x/y}$}}$ is inversely proportional to the energy gap between the occupied spin-up $p_{x}$ states and the unoccupied spin-down $p_{y}$ states near the Fermi level. Here, the energy gap is approximated by the energy difference between the two major peaks in the DOS of the occupied spin-up $p_{x}$ states and the unoccupied spin-down $p_{y}$ states. Upon adjusting \textit{U$_{\textup{eff}}$} from 0 eV to 2 eV, this energy gap changes from 1.51 eV to 2.09 eV, as shown in Figs. \ref{fig5}(b) and (c). The increase in the energy gap results in a smaller magnitude of the negative $\textup{MAE}_{\textup{Br-$p_{x/y}$}}$, which is responsible for the reduced magnitude of the negative Br-MAE. 

Recalling that the $p_{x}$ and $p_{y}$ orbitals of Br are degenerate, the same analysis applied to the unoccupied spin-down $p_{x}$ states and the occupied spin-up $p_{y}$ states of Br near the Fermi level shows that the gap between the corresponding main DOS peaks increases as well, likewise reducing the magnitude of the negative Br-MAE. Therefore, the negative Br-MAE contribution shrinks as \textit{U$_{\textup{eff}}$} increases.

For Fe, Fig. \ref{fig5}(d) shows that as \textit{U$_{\textup{eff}}$} increases from 0.8 eV to 2.0 eV, the $d_{xy}-d_{x^{2}-y^{2}}$ orbital pair makes a more negative contribution to the Fe-MAE, while the other orbital pairs contribute only marginally; thus only the $d_{x^{2}-y^{2}}$ and $d_{xy}$ orbitals of Fe are considered. To explain this, the spin-polarized DOS of the Fe-$d_{x^{2}-y^{2}}$ and Fe-$d_{xy}$ orbitals in SL H-FeBr$_2$ was calculated at \textit{U$_{\textup{eff}}$} = 0.8 eV and 2.0 eV, as shown in Figs. \ref{fig5}(e) and (f).
It is easy to see that the $d_{xy}$ and $d_{x^{2}-y^{2}}$ orbitals of Fe are degenerate.

In the case of \textit{U$_{\textup{eff}}$} = 0.8 eV, the occupied spin-down $d_{x^{2}-y^{2}}$ states (denoted $|d^{\downarrow}_{x^{2}-y^{2},o}\rangle $) interact strongly under SOC with the unoccupied spin-down $d_{xy}$ states (denoted $|d^{\downarrow}_{xy,u} \rangle$) near the Fermi level (as circled in Fig. \ref{fig5}(e)). Note that the spins of these states are parallel. The Fe-MAE contributed by these two sets of states is called the spin-conserved MAE, denoted $\textup{MAE}_{\textup{Fe},\textup{spin-conserved}}$. 
Here, we have
\begin{equation}
\begin{aligned}
 &\textup{MAE}_{\textup{Fe},\textup{spin-conserved}} \\
 & = \xi ^{2} \sum_{o,u} \frac{|\langle d^{\downarrow}_{xy,u}|\hat{L_{z}}| d^{\downarrow}_{x^{2}-y^{2},o} \rangle |^{2} -|\langle d^{\downarrow}_{xy,u}|\hat{L_{x}}| d^{\downarrow}_{x^{2}-y^{2},o} \rangle |^{2}}{E^{\downarrow}_{d_{xy,u}} - E^{\downarrow}_{d_{x^{2}-y^{2},o}} }.\\
\end{aligned}
\end{equation}
Using the matrix elements \cite{doi:10.1021/acs.inorgchem.9b00687} 
\begin{equation}
 \langle d_{xy} |\hat{L_{z}} | d_{x^{2}-y^{2}} \rangle = 2i, 
\end{equation}
and
\begin{equation}
 \langle d_{xy} |\hat{L_{x}} | d_{x^{2}-y^{2}} \rangle=0,
\end{equation}
$\textup{MAE}_{\textup{Fe},\textup{spin-conserved}}$ can be simplified to
\begin{equation}
\begin{aligned}
 \textup{MAE}_{\textup{Fe},\textup{spin-conserved}} = \xi ^{2} \sum_{o,u} \frac{4}{E^{\downarrow}_{d_{xy,u}} - E^{\downarrow}_{d_{x^{2}-y^{2},o}}}.
\end{aligned} 
\end{equation}

Clearly, $\textup{MAE}_{\textup{Fe},\textup{spin-conserved}}$ is positive, which accounts for the positive Fe-MAE when \textit{U$_{\textup{eff}}$} is about 0.8 eV. 
However, when \textit{U$_{\textup{eff}}$} reaches 2 eV, the occupied spin-down $d_{x^{2}-y^{2}}$ weight almost vanishes at the VBM, as seen in Fig. \ref{fig5}(f). This means that the SOC interaction between the unoccupied spin-down $d_{xy}$ states and the occupied spin-down $d_{x^{2}-y^{2}}$ states almost disappears at this value of \textit{U$_{\textup{eff}}$}. 
Instead, the unoccupied spin-down $d_{xy}$ states near the Fermi level interact mainly with the occupied spin-up $d_{x^{2}-y^{2}}$ states (denoted $|d^{\uparrow}_{x^{2}-y^{2},o}\rangle $) lying deep in the valence bands, as circled in Fig. \ref{fig5}(f). The SOC interaction between these two sets of states with antiparallel spins also contributes to the Fe-MAE; we denote this contribution $\textup{MAE}_{\textup{Fe},\textup{spin-flip}}$. It can be written as
\begin{equation}
\begin{aligned}
 &\textup{MAE}_{\textup{Fe},\textup{spin-flip}} \\
 & = -\xi ^{2} \sum_{o,u} \frac{|\langle d^{\downarrow}_{xy,u}|\hat{L_{z}}| d^{\uparrow}_{x^{2}-y^{2},o} \rangle |^{2} -|\langle d^{\downarrow}_{xy,u}|\hat{L_{x}}| d^{\uparrow}_{x^{2}-y^{2},o} \rangle |^{2}}{E^{\downarrow}_{d_{xy,u}} - E^{\uparrow}_{d_{x^{2}-y^{2},o}} }\\
 & = -\xi ^{2} \sum_{o,u} \frac{4}{E^{\downarrow}_{d_{xy,u}} - E^{\uparrow}_{d_{x^{2}-y^{2},o}}}.
\end{aligned}
\end{equation}

\begin{figure*}[ht]
\includegraphics[scale=0.42]{5.png}
\caption{
The band structure of ferromagnetic SL H-FeBr$_2$ calculated (a) in the spin-polarized case without SOC, (b) with out-of-plane magnetization and SOC, and (c) with in-plane magnetization and SOC.
In (b) and (c), the red, blue, and green dots represent the Fe-A$_{1}$, Fe-E$_{1}$, and Fe-E$_{2}$ orbitals, respectively. 
 }
\label{fig3}
\end{figure*}


The value of $\textup{MAE}_{\textup{Fe},\textup{spin-flip}}$ is negative, and it is now responsible for the negative Fe-MAE. 
Since the $d_{xy}$ and $d_{x^{2}-y^{2}}$ orbitals of Fe are degenerate in this case, the same analysis can be applied to the occupied spin-up $d_{xy}$ states and the unoccupied spin-down $d_{x^{2}-y^{2}}$ states of Fe near the Fermi level. After carefully examining the corresponding DOS features, we found that the gap between the major DOS peaks narrows with increasing \textit{U$_{\textup{eff}}$}, which directly adds to the negative Fe-MAE. In total, with increasing \textit{U$_{\textup{eff}}$}, the competition between $\textup{MAE}_{\textup{Fe},\textup{spin-conserved}}$ and $\textup{MAE}_{\textup{Fe},\textup{spin-flip}}$ causes the Fe-MAE to switch from positive to negative values.



\subsection{\label{sec:level2}Evolution of band structures with electronic correlation strength}

\begin{figure*}[ht]
\includegraphics[scale=0.25]{6.png}
\caption{
The band structure of SL H-FeBr$_2$ calculated at (a) \textit{U$_{\textup{eff}}$}=0.0 eV, (b) \textit{U$_{\textup{eff}}$}=0.6 eV, (c) \textit{U$_{\textup{eff}}$}=0.7 eV, (d) \textit{U$_{\textup{eff}}$}=0.8 eV, (e) \textit{U$_{\textup{eff}}$}=1.0 eV, and (f) \textit{U$_{\textup{eff}}$}=1.2 eV. The red, blue, and green dots represent the Fe-A$_{1}$, Fe-E$_{1}$, and Fe-E$_{2}$ orbitals, respectively. 
}
\label{fig6}
\end{figure*}


\begin{figure*}[ht]
\includegraphics[scale=0.65]{7.png}
\caption{
The Berry curvature of SL H-FeBr$_2$ in the 2D Brillouin zone calculated at (a) \textit{U$_{\textup{eff}}$}=0 eV and (b) \textit{U$_{\textup{eff}}$}=0.8 eV; the Berry curvature in (a) and (b) is given in units of Bohr$^{2}$.
(c) The topological edge state of SL H-FeBr$_2$ along the (100) direction calculated at \textit{U$_{\textup{eff}}$}=0.8 eV.
(d) The topological phase diagram of SL H-FeBr$_2$ with varying \textit{U$_{\textup{eff}}$}. 
}
\label{fig7}
\end{figure*}

Let us return to Figs. \ref{fig5}(e) and (f). The electronic structure associated with the Fe 3d states varies greatly with \textit{U$_{\textup{eff}}$}, which should be reflected in the energy bands. Therefore, we carefully calculated the band structures at different \textit{U$_{\textup{eff}}$} values.



For comparison, the band structure of FM SL H-FeBr$_{2}$ at \textit{U$_{\textup{eff}}$} = 0 eV was examined first. The spin-polarized band structure without SOC is plotted in Fig. \ref{fig3}(a). It can be clearly seen that the spin-up and spin-down channels are split, with the spin-down component dominating near the Fermi level.
Meanwhile, the spin-down band structure has a band gap of about 0.253 eV. Therefore, this system is an FM half-semiconductor. In addition, the energies of the valence band maximum (VBM) at K+ and K- are equal, and the same holds for the conduction band minimum (CBM) at the two k points. Thus, the valleys in this band structure are degenerate.
This feature indicates that SL H-FeBr$_{2}$ belongs to the 2D valleytronic materials \cite{2016Valleytronics}.




In fact, the SOC cannot be ignored in this system, so it is taken into account in the following treatment. As mentioned before, the easy magnetization axis of this system can lie in the \textit{ab} plane or out-of-plane. When the easy axis lies in the \textit{ab} plane, the calculated bands show equal energy gaps at K+ and K- (Fig. \ref{fig3}(c)), and since the CBM (VBM) at K+ and K- are degenerate, there is no spontaneous valley polarization. When the easy axis is switched out-of-plane, the band gap is 0.314 eV at the K+ point and 0.191 eV at the K- point (Fig. \ref{fig3}(b)), showing spontaneous valley polarization. In this case the system is in the FV state. Strikingly, the difference between the band gaps at K+ and K- is 123 meV, larger than in other typical FV materials such as SL H-FeCl$_{2}$ (106 meV) \cite{hu2020concepts}, Nb$_{3}$I$_{8}$ (107 meV) \cite{PhysRevB.102.035412}, LaBr$_{2}$ (33 meV) \cite{2019Single}, and MnPX$_{3}$ (43 meV) \cite{doi:10.1073/pnas.1219420110}. Apparently, the valley state couples strongly to the magnetization direction in SL H-FeBr$_{2}$, which is explained theoretically in Appendix \ref{app}. According to previous reports \cite{PhysRevB.105.104416,2020The}, such FV materials with out-of-plane magnetization potentially possess topologically nontrivial states. Therefore, in the following calculations, the magnetization orientation was set out-of-plane.

The evolution of the electronic band structure with \textit{U$_{\textup{eff}}$} was then investigated, and representative band structures at different \textit{U$_{\textup{eff}}$} values are shown in Fig. \ref{fig6}. When \textit{U$_{\textup{eff}}$} increases to 0.6 eV, both the conduction and valence bands move towards the Fermi level, reducing the band gaps at the K+ and K- points. When \textit{U$_{\textup{eff}}$} is increased to 0.7 eV, the band gap at the K+ point is still finite, but the band gap at the K- point closes (as shown in Fig. \ref{fig6}(c)). The system then exhibits a Dirac-like band crossing at K- \cite{RevModPhys.90.015001}. In this case, the system becomes a so-called half-valley metal (HVM) \cite{hu2020concepts}, which can provide massless elementary excitations that potentially support well-behaved charge transport. 

The band gap closed at the K- point reopens when \textit{U$_{\textup{eff}}$} becomes slightly larger than 0.7 eV. 
Interestingly, at the K- point, the low-lying E$_{1}$ orbitals at the VBM shift up to the CBM, while the high-lying A$_{1}$ orbital at the CBM shifts down to the VBM (as shown in Fig. \ref{fig6}(d)). Despite this, the orbital composition of the bands at the K+ point remains intact, forming a single-valley band-inverted state. When \textit{U$_{\textup{eff}}$} is tuned to about 1.0 eV, the band gap at the K- point stays reopened while the band gap at the K+ point closes (as shown in Fig. \ref{fig6}(e)). Compared with the case of \textit{U$_{\textup{eff}}$} = 0.8 eV, the orbital compositions of the CBM and VBM at the K+ point are then reversed.
Meanwhile, we found that when \textit{U$_{\textup{eff}}$} is larger than 1.0 eV, the conduction band (valence band) near the Fermi level is contributed only by the Fe-E$_{1}$ (A$_{1}$) orbitals (as shown in Fig. \ref{fig6}(f)). As \textit{U$_{\textup{eff}}$} increases further, the Fe-A$_{1}$ and Fe-E$_{1}$ orbitals are no longer entangled with each other in the conduction (valence) band near the Fermi level. During this disentanglement process, the SOC interaction between the occupied spin-down Fe-$d_{xy}$ ($d_{x^{2}-y^{2}}$) orbitals in the valence bands near the Fermi level and the unoccupied spin-down Fe-$d_{x^{2}-y^{2}}$ ($d_{xy}$) orbitals in the conduction bands near the Fermi level weakens, which is the origin of the rapid switch in the Fe-MAE. 




To further characterize the effect of \textit{U$_{\textup{eff}}$} on the valley-contrasting physics, the Berry curvature along the $\textit{z}$ direction was calculated based on the Kubo formula \cite{PhysRevLett.49.405}:
\begin{equation}
 \Omega_{z} (\textbf{k}) = -\sum_{n}\sum_{n\neq n'} f_{n} \frac{2 \textup{Im} \langle \psi_{n\textbf{k}} | v_{x} | \psi_{n'\textbf{k}} \rangle \langle \psi_{n'\textbf{k}} | v_{y} | \psi_{n\textbf{k}} \rangle }{(E_{n}-E_{n'})^{2}}.
\end{equation}
Here, $f_{n}$ is the Fermi-Dirac distribution function, $E_{n}$ is the eigenvalue of the Bloch state $| \psi_{n\textbf{k}} \rangle $, and $v_{x/y}$ is the velocity operator. 



The k-resolved Berry curvatures for \textit{U$_{\textup{eff}}$} = 0 eV and \textit{U$_{\textup{eff}}$} = 0.8 eV are plotted in Figs. \ref{fig7}(a) and \ref{fig7}(b), respectively. In the case of \textit{U$_{\textup{eff}}$} = 0 eV, the Berry curvatures at K+ and K- have peaks of opposite signs and different absolute values, which originate from the simultaneous breaking of time-reversal and space-inversion symmetry. For the system with a single-valley band inversion at \textit{U$_{\textup{eff}}$} = 0.8 eV, the Berry curvature peaks at K+ and K- have the same sign. In this case, the integral of the Berry curvature over the full Brillouin zone yields a nonzero Chern number, indicating a topologically nontrivial character. To uncover the nature of this topologically nontrivial feature, we computed the edge states along the (100) direction at \textit{U$_{\textup{eff}}$} = 0.8 eV. As shown in Fig. \ref{fig7}(c), there is a gapless chiral edge state that simultaneously connects the valence band at K+ and the conduction band at K-. Such gapless chiral edge states are fingerprints of quantum anomalous Hall (QAH) states in topologically nontrivial systems with ferromagnetism \cite{doi:10.1146/annurev-conmatphys-031115-011417,doi:10.1146/annurev-conmatphys-033117-054144}. Furthermore, the anomalous Hall conductivity (AHC) was calculated by the following formula \cite{PhysRevLett.88.207208,PhysRevLett.92.037204}:
\begin{equation}
\sigma_{xy} = \frac{e^{2}}{h} \int_{BZ} \frac{d\textbf{k} }{(2\pi)^{2}} \Omega_{z} (\textbf{k}). 
\end{equation}
As shown in Fig. S2 in the Supplemental Material \cite{SM}, the AHC within the energy gap is $-\frac{e^{2}}{h}$, confirming a QAH state with Chern number \textit{C} = -1 in this system. Note that this single-valley band-inverted state with the QAH effect is the so-called QAVH state \cite{hu2020concepts}.
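For clarity, the Chern number quoted above follows from the Berry curvature through the standard relations
\begin{equation*}
C = \frac{1}{2\pi}\int_{BZ} \Omega_{z} (\textbf{k})\, d^{2}k, \qquad \sigma_{xy} = C\,\frac{e^{2}}{h},
\end{equation*}
so an AHC plateau of $-\frac{e^{2}}{h}$ inside the gap corresponds directly to $C=-1$.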
Therefore, when the electron correlation strength is appropriate, SL H-FeBr$_{2}$ can host a QAVH state, which has great potential for the development of spintronic devices. 

In principle, the presence of the QAVH state in the system depends on the value of \textit{U$_{\textup{eff}}$}. We therefore carefully searched for the range of \textit{U$_{\textup{eff}}$} in which the QAVH effect appears in the system. Figure \ref{fig7}(d) displays the topological phase diagram, in which the QAVH state is predicted to exist for \textit{U$_{\textup{eff}}$} ranging from 0.7 eV to 1.0 eV. As we know, the QAVH state can only survive when the system hosts PMA. The \textit{U$_{\textup{eff}}$} range of the QAVH state (0.7 eV to 1.0 eV) lies within the \textit{U$_{\textup{eff}}$} interval of PMA (0.1 eV to 1.1 eV). Thus, the QAVH state can naturally exist in this system without external magnetic fields. 

\subsection{\label{sec:level2}Discussions}
 \begin{figure*}[ht]
\includegraphics[scale=0.25]{8.png}
\caption{
The evolution of the total MAE of (a) SL H-FeCl$_2$, (b) SL H-FeBr$_2$, and (c) SL H-FeI$_2$. The evolution of the element-resolved MAE of (d) SL H-FeCl$_2$, (e) SL H-FeBr$_2$, and (f) SL H-FeI$_2$ is also given. 
}
\label{fig8}
\end{figure*}

We are now curious whether the \textit{U$_{\textup{eff}}$}-dependent MAE behavior of SL H-FeBr$_2$ is also present in its counterparts SL H-FeX$_2$ (X = Cl, I). To this end, the total MAEs of SL H-FeX$_2$ (X = Cl, I) in the FM state were computed and are plotted in Figs. \ref{fig8}(a)-(c). Obviously, as \textit{U$_{\textup{eff}}$} increases over the considered range, the total MAEs of all SL H-FeX$_2$ members change sensitively. 

Commonly, the total MAE of SL H-FeX$_2$ (X = Cl, I) shows a rapid decrease after \textit{U$_{\textup{eff}}$} reaches a critical magnitude.
To understand this common feature, the element-resolved MAE of SL H-FeX$_2$ (X = Cl, Br, I) at different values of \textit{U$_{\textup{eff}}$} is plotted in Figs. \ref{fig8}(d)-(f). When \textit{U$_{\textup{eff}}$} is small, the Fe-MAE of SL H-FeX$_2$ (X = Cl, Br, I) remains almost constant; however, after \textit{U$_{\textup{eff}}$} reaches a critical value, the Fe-MAE suddenly decreases with \textit{U$_{\textup{eff}}$}. 
Similar to the case of SL H-FeBr$_2$, these common features of the Fe-MAE in SL H-FeX$_2$ (X = Cl, Br, I) all originate from the disentanglement of the Fe-A$_1$ and E$_1$ orbitals, which is reflected in the evolution of their band structures with \textit{U$_{\textup{eff}}$} (as shown in Figs. S3 and S5 in the Supplemental Material \cite{SM}). This disentanglement weakens the SOC interaction between the occupied Fe-$d_{xy}$ ($d_{x^{2}-y^{2}}$) orbitals at the VBM and the unoccupied $d_{x^{2}-y^{2}}$ ($d_{xy}$) orbitals at the CBM, giving rise to a strong decrease of the Fe-MAE. It is worth noting that if the magnetization directions of all SL H-FeX$_2$ (X = Cl, Br, I) compounds are forced out-of-plane, topological phase transitions may commonly occur during the continuous tuning of \textit{U$_{\textup{eff}}$}, giving rise to HVM and QAVH states.



In addition to the common features mentioned above, the MAEs of the different members of SL H-FeX$_{2}$ (X = Cl, Br, I) also show significant differences as functions of \textit{U$_{\textup{eff}}$}.
First, the MAEs of SL H-FeCl$_{2}$ and H-FeBr$_{2}$ exhibit transitions between positive and negative values with increasing \textit{U$_{\textup{eff}}$}, which correspond to the reversal of their magnetization between out-of-plane and in-plane, but the MAE of SL H-FeI$_{2}$ does not behave in this way. Second, when the value of \textit{U$_{\textup{eff}}$} is small, the MAE from the X (X = Br, I) elements (denoted as X-MAE) in both SL H-FeBr$_{2}$ and H-FeI$_{2}$ changes significantly with the increase of \textit{U$_{\textup{eff}}$}; as the \textit{U$_{\textup{eff}}$} value continues to rise, the X-MAEs (X = Br, I) of both systems change only slowly. However, the Cl-MAE of SL H-FeCl$_{2}$ is not sensitive to \textit{U$_{\textup{eff}}$} throughout this process, as shown in Figs. \ref{fig8}(d)-(f). Apparently, the halogen atoms Br and I tend to provide negative contributions to the MAE. Among them, the amplitude of the negative I-MAE in H-FeI$_{2}$ is larger than that of the positive Fe-MAE, so the total MAE of H-FeI$_{2}$ is always negative. In SL H-FeCl$_{2}$, by contrast, the total MAE comes almost entirely from the Fe-MAE, since the Cl atoms hardly contribute to the MAE. Therefore, the evolution of the total MAE in SL H-FeCl$_{2}$ is dominated by the Fe-MAE. Based on these calculations, we found that the heavier the halogen atom, the greater its contribution to the negative MAE.\n\nFinally, similar to the case of SL H-FeBr$_{2}$, in SL H-FeX$_{2}$ (X = Cl, I) with out-of-plane magnetization, we also observed chiral edge states connecting the conduction and valence bands, as well as an AHC of $-\frac{e^{2}}{h}$ lying in the energy gap (see Figs. S4 and S6 in the Supplemental Material \cite{SM}) in specific intervals of \textit{U$_{\textup{eff}}$}. These characteristics confirm QAH states with a Chern number of \textit{C} = -1. We emphasize that these common features of correlation-driven electronic topology stem from the Fe-3d orbitals, which dominate the low-energy states of SL H-FeX$_{2}$ (X = Cl, Br, I). \n\n\section{\label{sec:level1}CONCLUSION}\nWe investigated the evolution of the MAE as well as the electronic structure of SL H-FeBr$_2$ under varying correlation strength, quantified by the \textit{U$_{\textup{eff}}$} imposed on the Fe ions. It is found that the MAE first increases and then decreases with \textit{U$_{\textup{eff}}$}, and the transition between negative and positive MAE reflects the switching between out-of-plane and in-plane magnetization. This non-monotonic evolution of the MAE stems from the competition between the element-resolved MAEs of Fe and Br. The evolution of the element-resolved MAE was found to arise from the variation of the spin-orbit coupling (SOC) interaction between different orbitals. Further investigation revealed that as \textit{U$_{\textup{eff}}$} increases, the energy bands of SL H-FeBr$_2$ at K+ and K- invert in turn, giving rise to topological phase transitions, and a QAVH state with chiral edge states was predicted. By comparing the MAE evolution behaviors of the different members of the SL H-FeX$_2$ (X = Cl, Br, I) family, the underlying mechanism behind the universality and specificity of the MAE evolution is provided. 
Our study deepens the understanding of the correlation-induced electronic structure transitions of the SL H-FeX$_2$ (X = Cl, Br, I) family, which would open new perspectives for possible spintronics and valleytronics applications in nanoelectronic devices based on these materials.\n\n\section{\label{sec:level1}ACKNOWLEDGEMENTS}\nThe author sincerely thanks Wenhui Duan, Yong Xu, Haowei Chen, and Zhiming Xu for helpful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Role of Random Occlusion}\n\nThe random occlusion (RO) we designed for data augmentation is similar to the random erasing (RE) \cite{zhong2020random} and cutout \cite{devries2017improved} methods. In the RE implementation, the target erasing area is sampled from a combination of a random area and a random aspect ratio, and could exceed the original image height or width. Therefore, RE needs multiple tries (100 by default) to generate a reasonable region for erasing. In contrast, our implementation of random occlusion uses a square area, with the side length randomly sampled up to $0.8\times$ the image width, placed at a random valid location. Then the square area is filled with white pixels. Note that with a simple square area, there is no need to repeatedly sample areas and aspect ratios and check their validity, and hence the generation process is more efficient. As for the cutout method, it uses multiple square regions of fixed sizes specified by hyperparameters, rather than random sizes. The fixed-size regions may make the cut either too small or too large, and so they are not very convenient to set.\n\nTo show their differences, we compare these data augmentation methods, as well as a baseline without any random occlusion, in the training of QAConv. From the results shown in Table \ref{tab:occlusion}, it can be observed that the three data augmentation methods generally improve the baseline, which does not apply any random occlusion. Intuitively, they are useful for QAConv because random occlusion forces QAConv to learn various local correspondences, instead of only salient but easy ones. Besides, the three data augmentation methods perform comparably, with the RO implementation being slightly better. Therefore, considering also the efficiency of the RO implementation, it is adopted in the training of the proposed QAConv algorithm.\n\n\begin{table}\n \centering\n \caption{Role of random occlusion.}\label{tab:occlusion}\n \begin{tabular}{|l|c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Market$\rightarrow$Duke} & \multicolumn{2}{|c|}{Duke$\rightarrow$Market} \\\n \cline{2-5}\n & Rank-1 & mAP & Rank-1 & mAP \\\n \hline\n QAConv without occlusion & 50.5 & 29.5 & 61.6 & 28.4\\\n QAConv with RE~\cite{zhong2020random} & 51.6 & 30.6 & 62.0 & 29.8\\\n QAConv with cutout~\cite{devries2017improved} & 51.6 & 30.8 & 62.6 & 30.3\\\n QAConv with RO & \textbf{54.4} & \textbf{33.6} & \textbf{62.8} & \textbf{31.6}\\\n \hline\n \end{tabular}\n\end{table}
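For concreteness, the following is a minimal sketch of the RO augmentation described above (NumPy-style pseudo-implementation; the function name and the application probability are our own choices, since the description does not fix them):\n\begin{verbatim}\nimport random\n\ndef random_occlusion(img, max_ratio=0.8, prob=0.5):\n    # img: H x W x C uint8 array; occludes a random square in place\n    if random.random() > prob:  # applied stochastically (assumption)\n        return img\n    h, w = img.shape[:2]\n    size = random.randint(1, int(max_ratio * w))  # side <= 0.8 * width\n    top = random.randint(0, h - size)   # any position keeping the\n    left = random.randint(0, w - size)  # square inside is valid\n    img[top:top + size, left:left + size] = 255  # fill with white\n    return img\n\end{verbatim}\nSince the region is always a square lying fully inside the image, a single draw suffices, which is where the efficiency gain over RE comes from.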
\section{Complete Comparisons of Backbone Networks}\nTables \ref{tab:duke} and \ref{tab:market} show complete comparisons between the QAConv results with the ResNet-50 backbone (denoted as QAConv$_{50}$) and with the ResNet-152 backbone (denoted as QAConv$_{152}$), with DukeMTMC-reID and Market-1501 as the target datasets, respectively. Results of applying re-ranking alone are not shown in the main paper.\n\begin{table}\n \centering\n \caption{Comparison (\%) of backbone networks with DukeMTMC-reID as the target dataset.}\label{tab:duke}\n \begin{tabular}{|l|c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Training} & \multicolumn{2}{|c|}{Test: Duke}\\\n \cline{2-5}\n & Source & Target & R1 & mAP \\\n \hline\n \hline\n QAConv$_{50}$ & Market & & 48.8 & 28.7\\\n QAConv$_{152}$ & Market & & 54.4 & 33.6 \\\n QAConv$_{50}$ + RR & Market & & 56.9 & 47.8 \\\n QAConv$_{152}$ + RR & Market & & 61.8 & 52.4 \\\n QAConv$_{50}$ + RR + TLift & Market & & 64.5 & 55.1 \\\n QAConv$_{152}$ + RR + TLift & Market & & 70.0 & 61.2 \\\n \hline\n \hline\n QAConv$_{50}$ & MSMT & & 69.4 & 52.6\\\n QAConv$_{152}$ & MSMT & & 72.2 & 53.4\\\n QAConv$_{50}$ + RR & MSMT & & 76.7 & 71.2 \\\n QAConv$_{152}$ + RR & MSMT & & 78.1 & 72.4 \\\n QAConv$_{50}$ + RR + TLift & MSMT & & 80.3 & 77.2 \\\n QAConv$_{152}$ + RR + TLift & MSMT & & 82.2 & 78.4\\\n \hline\n \end{tabular}\n\end{table}\n\begin{table}\n \centering\n \caption{Comparison (\%) of backbone networks with Market-1501 as the target dataset.}\label{tab:market}\n \begin{tabular}{|l|c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Training} & \multicolumn{2}{|c|}{Test: Market}\\\n \cline{2-5}\n & Source & Target & R1 & mAP \\\n \hline\n \hline\n QAConv$_{50}$ & Duke & & 58.6 & 27.2 \\\n QAConv$_{152}$ & Duke & & 62.8 & 31.6 \\\n QAConv$_{50}$ + RR & Duke & & 65.7 & 45.8 \\\n QAConv$_{152}$ + RR & Duke & & 68.5 & 51.2 \\\n QAConv$_{50}$ + RR + TLift & Duke & & 74.6 & 51.5 \\\n QAConv$_{152}$ + RR + TLift & Duke & & 78.7 & 58.2 \\\n \hline\n \hline\n QAConv$_{50}$ & MSMT & & 72.6 & 43.1 \\\n QAConv$_{152}$ & MSMT & & 73.9 & 46.6 \\\n QAConv$_{50}$ + RR & MSMT & & 77.4 & 65.6 \\\n QAConv$_{152}$ + RR & MSMT & & 79.2 & 69.1 \\\n QAConv$_{50}$ + RR + TLift & MSMT & & 86.5 & 72.2\\\n QAConv$_{152}$ + RR + TLift & MSMT & & 88.4 & 76.0\\\n \hline\n \end{tabular}\n\end{table}\n\n\section{Comparisons to Other Losses}\n\nSince the hard triplet mining loss \cite{hermans2017defense} is popular in person re-identification, we further include it in the loss comparisons. Besides, we provide a further analysis of different loss configurations of QAConv. The results are shown in Table \ref{tab:loss} under Market$\rightarrow$Duke, where each triplet result is reported with its best margin. While the mini-batch hard triplet loss does improve the softmax cross-entropy loss, it appears inefficient for learning QAConv, possibly because local matching requires a large number of pairs to learn from, as provided by the proposed class memory and focal loss, but not by mini-batches. Note that the focal loss is aggressive in mining hard examples, though in a soft way. 
The hard triplet loss, by contrast, is in fact even more aggressive.\n\n\begin{table}\n \centering\n \caption{Role of loss functions under Market$\rightarrow$Duke (\%).}\label{tab:loss}\n \begin{tabular}{|c|c|c|c|}\n \hline\n \multicolumn{2}{|c|}{Method} & Rank-1 & mAP \\\n \hline\n \multirow{5}{*}{\tabincell{c}{ResNet-152}} & Softmax cross-entropy & 34.9 & 18.4\\\n & Softmax cross-entropy + triplet & 39.6 & 23.0\\\n & Arc loss~\cite{Deng-CVPR19-ArcFace} & 35.3 & 17.1\\\n & Center loss~\cite{Wen-ECCV16-CenterLoss,Jin-IJCB17-CenterLossReID} & 38.9 & 22.1\\\n & Class memory loss & 40.7 & 21.8\\\n \hline\n \multirow{7}{*}{\tabincell{c}{QAConv$_{50}$}} & Mini-batch triplet (w\/o class memory) & 42.2 & 23.7\\\n & Softmax cross-entropy & 43.4 & 24.9\\\n & Binary cross-entropy & 46.1 & 27.3\\\n & Softmax cross-entropy + triplet & 44.3 & 24.2 \\\n & Binary cross-entropy + triplet & 44.7 & 23.6\\\n & Focal loss + triplet & 43.3 & 23.2\\\n & Focal loss (default) & \textbf{48.8} & \textbf{28.7}\\\n \hline\n \end{tabular}\n\end{table}\n\n\section{Fusion of Global Similarity}\n\nTo see whether fusing a global similarity branch helps improve the performance, we tried adding an extra global feature learning branch, with a global average pooling on the final feature maps and a softmax cross-entropy loss for classification. During testing, the cosine similarity computed from this global feature branch is fused with the QAConv similarity. However, after trying different weights for the two losses, the best mAP we could get was 28.4\% under Market$\rightarrow$Duke, with a weight of 0.001 for the global branch. This is a bit worse than the default QAConv (28.7\%). The reason may be that the vanilla global feature branch cannot handle misalignments and occlusions, so more advanced techniques are needed here. This deserves further study.\n\n\section{TLift for Other Methods}\n\nNote that TLift can also be generally applied to other methods for improvements. To demonstrate this, Tables \ref{tab:tlift-market} and \ref{tab:tlift-duke} show the results of applying TLift to all baseline methods under Market$\rightarrow$Duke and Duke$\rightarrow$Market, respectively. It can be observed that, beyond the improvements made by re-ranking, TLift can further improve all baseline methods. 
The improvements are consistently large, with Rank-1 improved by 10.1\\%-14.1\\%, and mAP improved by 3.6\\%-11.1\\%.\n\n\\begin{table}\n \\centering\n \\caption{Role of TLift under Market$\\rightarrow$Duke (\\%).}\\label{tab:tlift-market}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\tabincell{c}{Method}} & \\multicolumn{2}{|c|}{Original} & \\multicolumn{2}{|c|}{+ RR} & \\multicolumn{2}{|c|}{+ RR + TLift}\\\\\n \\cline{2-7}\n & Rank-1 & mAP & Rank-1 & mAP & Rank-1 & mAP\\\\\n \\hline\n Softmax cross-entropy & 34.9 & 18.4 & 41.5 & 30.5 & 51.7 & 39.7\\\\\n Arc loss~\\cite{Deng-CVPR19-ArcFace} & 35.3 & 17.1 & 39.8 & 26.3 & 51.0 & 34.8\\\\\n Center loss~\\cite{Wen-ECCV16-CenterLoss,Jin-IJCB17-CenterLossReID} & 38.9 & 22.1 & 42.5 & 31.5 & 56.6 & 42.6\\\\\n Class memory loss & 40.7 & 21.8 & 47.8 & 36.1 & 59.6 & 46.2\\\\\n QAConv & \\textbf{54.4} & \\textbf{33.6} & \\textbf{61.8} & \\textbf{52.4} & \\textbf{70.0} & \\textbf{61.2}\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n \\centering\n \\caption{Role of TLift under Duke$\\rightarrow$Market (\\%).}\\label{tab:tlift-duke}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\tabincell{c}{Method}} & \\multicolumn{2}{|c|}{Original} & \\multicolumn{2}{|c|}{+ RR} & \\multicolumn{2}{|c|}{+ RR + TLift}\\\\\n \\cline{2-7}\n & Rank-1 & mAP & Rank-1 & mAP & Rank-1 & mAP\\\\\n \\hline\n Softmax cross-entropy & 48.5 & 21.4 & 53.2 & 33.7 & 63.3 & 38.0\\\\\n Arc loss~\\cite{Deng-CVPR19-ArcFace} & 48.9 & 21.4 & 54.5 & 34.8 & 64.8 & 39.3\\\\\n Center loss~\\cite{Wen-ECCV16-CenterLoss,Jin-IJCB17-CenterLossReID} & 48.8 & 22.0 & 52.5 & 33.3 & 63.0 & 36.9\\\\\n Class memory loss & 47.8 & 20.5 & 52.9 & 33.1 & 63.4 & 37.5\\\\\n QAConv & \\textbf{62.8} & \\textbf{31.6} & \\textbf{68.5} & \\textbf{51.2} & \\textbf{78.7} & \\textbf{58.2}\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n \\centering\n \\caption{Influence of TLift parameters under Market$\\rightarrow$Duke (\\%). 
Bold numbers correspond to the default parameters.}\label{tab:tlift-para-market}\n \linespread{1.2}\selectfont\n \setlength{\tabcolsep}{2mm}{\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\tau$ & 50 & \textbf{100} & 150 & 200 & 250 & 300 & 350 & 400 & 450 & 500\\\n \hline\n Rank-1 & 69.3 & \textbf{70.0} & 69.7 & 69.8 & 69.1 & 68.3 & 66.8 & 65.5 & 64.4 & 63.9 \\\n mAP & 60.7 & \textbf{61.2} & 60.7 & 59.9 & 58.8 & 57.3 & 55.7 & 54.0 & 52.4 & 51.2\\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\sigma$ & 50 & 100 & 150 & \textbf{200} & 250 & 300 & 350 & 400 & 450 & 500\\\n \hline\n Rank-1 & 67.4 & 69.5 & 70.4 & \textbf{70.0} & 69.4 & 69.2 & 68.9 & 68.4 & 68.0 & 67.7\\\n mAP & 55.4 & 59.6 & 60.9 & \textbf{61.2} & 61.0 & 60.8 & 60.5 & 60.1 & 59.8 & 59.5 \\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $K$ & 5 & \textbf{10} & 15 & 20 & 30 & 40 & 50 & 100 & 150 & 200\\\n \hline\n Rank-1 & 69.7 & \textbf{70.0} & 70.2 & 70.0 & 69.4 & 68.9 & 68.2 & 67.0 & 65.5 & 64.8\\\n mAP & 60.8 & \textbf{61.2} & 61.2 & 61.0 & 60.3 & 59.6 & 58.8 & 56.8 & 55.7 & 55.2\\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\alpha$ & 0.01 & 0.02 & 0.05 & 0.1 & \textbf{0.2} & 0.3 & 0.4 & 0.5 & 0.7 & 1\\\n \hline\n Rank-1 & 70.4 & 70.4 & 70.2 & 70.2 & \textbf{70.0} & 69.4 & 69.1 & 68.6 & 68.3 & 67.5 \\\n mAP & 60.8 & 60.8 & 60.8 & 61.0 & \textbf{61.2} & 61.1 & 61.0 & 60.9 & 60.4 & 59.7\\\n \hline\n \end{tabular}}\n\end{table}\n\n\section{Parameter Analysis}\n\nConsidering the memory consumption and efficiency, the kernel size of QAConv is set to $s=1$. The parameters for TLift are $\tau=100$, $\sigma=200$, $K=10$, and $\alpha=0.2$. They were fixed in all experiments after some initial tries. To understand their influence, we vary them one by one, with the corresponding results shown in Tables \ref{tab:tlift-para-market} and \ref{tab:tlift-para-duke}. It can be observed that the parameters are not sensitive within a broad range, so they are easy to select. Besides, better results can sometimes be obtained with parameter values other than the defaults.\n\n\begin{table}\n \centering\n \caption{Influence of TLift parameters under Duke$\rightarrow$Market (\%). 
Bold numbers correspond to the default parameters.}\label{tab:tlift-para-duke}\n \linespread{1.2}\selectfont\n \setlength{\tabcolsep}{2mm}{\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\tau$ & 50 & \textbf{100} & 150 & 200 & 250 & 300 & 350 & 400 & 450 & 500\\\n \hline\n Rank-1 & 76.2 & \textbf{78.7} & 79.8 & 79.7 & 79.9 & 79.0 & 78.6 & 78.2 & 77.6 & 77.2 \\\n mAP & 57.2 & \textbf{58.2} & 58.6 & 58.4 & 58.2 & 57.7 & 57.2 & 56.6 & 56.0 & 55.4\\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\sigma$ & 50 & 100 & 150 & \textbf{200} & 250 & 300 & 350 & 400 & 450 & 500\\\n \hline\n Rank-1 & 76.1 & 78.5 & 78.6 & \textbf{78.7} & 78.6 & 78.1 & 78.0 & 77.9 & 77.9 & 77.6 \\\n mAP & 55.6 & 57.6 & 58.1 & \textbf{58.2} & 58.5 & 58.7 & 58.8 & 59.0 & 59.1 & 59.2\\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $K$ & 5 & \textbf{10} & 15 & 20 & 30 & 40 & 50 & 100 & 150 & 200\\\n \hline\n Rank-1 & 79.6 & \textbf{78.7} & 78.1 & 77.6 & 76.6 & 76.2 & 75.8 & 74.4 & 73.4 & 72.7\\\n mAP & 56.9 & \textbf{58.2} & 58.4 & 58.3 & 58.0 & 57.9 & 57.8 & 57.3 & 56.6 & 55.9\\\n \hline\n \end{tabular}\n \\[5mm]\n \begin{tabular}{|c|c c c c c c c c c c|}\n \hline\n $\alpha$ & 0.01 & 0.02 & 0.05 & 0.1 & \textbf{0.2} & 0.3 & 0.4 & 0.5 & 0.7 & 1\\\n \hline\n Rank-1 & 78.4 & 78.5 & 78.7 & 78.8 & \textbf{78.7} & 78.5 & 78.0 & 77.6 & 76.5 & 75.4\\\n mAP & 53.8 & 54.1 & 55.0 & 56.3 & \textbf{58.2} & 59.4 & 59.9 & 60.0 & 59.5 & 58.5\\\n \hline\n \end{tabular}}\n\end{table}\n\n\section{Memory Usage}\n\nOne drawback of QAConv is that it requires more memory to run than other methods: it needs to store feature maps of images rather than feature vectors, and feature maps are generally larger than representation features. For training on DukeMTMC-reID, the GPU memory consumption of QAConv is about 2.83GB, while that of the softmax baseline is about 2.78GB. They are comparable because, though QAConv spends somewhat more on the class memory, it uses only three layers of the ResNet-50, while the softmax baseline uses four layers. For inference, the peak GPU memory of QAConv is about 2.3GB, while that of the softmax baseline is about 1.7GB.
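As a rough sanity check, the extra cost of the class memory follows directly from its $[c, d, h, w]$ buffer shape. Assuming a 32-bit float buffer (our assumption), the 702 training identities of DukeMTMC-reID, 128 channels, and the $24\times8$ feature maps of $384\times128$ inputs give:\n\begin{verbatim}\nc, d, h, w = 702, 128, 24, 8  # Duke train IDs; layer3 map + 1x1 conv\nprint(c * d * h * w * 4 \/ 1024 ** 3)  # ~0.064 GB in float32\n\end{verbatim}\nThis is about 0.06 GB, consistent in magnitude with the 2.83GB vs. 2.78GB training gap above, though the two configurations also differ in the number of backbone layers used.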
\section*{Biography}\n\nShengcai Liao is a Lead Scientist in the Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE. He is a Senior Member of IEEE. Previously, he was an Associate Professor in the Institute of Automation, Chinese Academy of Sciences (CASIA). He received the B.S. degree in mathematics from Sun Yat-sen University in 2005 and the Ph.D. degree from CASIA in 2010. He was a Postdoc at Michigan State University during 2010-2012. His research interests include object detection, face recognition, and person re-identification. He has published over 100 papers, with over 11,000 citations according to Google Scholar. He was awarded the Best Student Paper in ICB 2006, ICB 2015, and CCBR 2016, and the Best Paper in ICB 2007. He was also awarded the IJCB 2014 Best Reviewer and CVPR 2019 Outstanding Reviewer. He was an Assistant Editor for the book \"Encyclopedia of Biometrics (2nd Ed.)\". He also served as an Area Chair for ICPR 2016, ICB 2016, and ICB 2018, and a reviewer for ICCV, CVPR, ECCV, TPAMI, IJCV, TIP, TIFS, etc. He was the Winner of the CVPR 2017 Detection in Crowded Scenes Challenge and the ICCV 2019 NightOwls Pedestrian Detection Challenge. Homepage: \url{https:\/\/liaosc.wordpress.com\/}\n\n~\\\n\nLing Shao (Senior Member, IEEE) is currently the Executive Vice President and a Provost of the Mohamed bin Zayed University of Artificial Intelligence. He is also the CEO and the Chief Scientist of the Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, United Arab Emirates. His research interests include computer vision, machine learning, and medical imaging. He is a fellow of IAPR, IET, and BCS.\n\n\bibliographystyle{splncs04}\n\n\section{Introduction}\nPerson re-identification is an active research topic in computer vision. It aims at finding the same person as the query image from a large volume of gallery images. With the progress in deep learning, person re-identification has been largely advanced in recent years. However, when generalization ability becomes an important concern, as required by practical applications, existing methods usually lack satisfactory performance, as evidenced by direct cross-dataset evaluation \cite{yi2014deep,Hu2014Cross}. To address this, many transfer learning, domain adaptation, and unsupervised learning methods, performed on the target domain, have been proposed. However, these methods require heavy computation during deployment, limiting their application in practical scenarios where the deployment machine may have limited resources to support deep learning and users may not be able to wait for a time-consuming adaptation stage. Therefore, improving the baseline model's generalization ability to support ready usage is still of urgent importance.\n\nMost existing person re-identification methods compute a fixed representation vector, also known as a feature vector, for each image, and employ a typical distance or similarity metric (e.g. Euclidean distance or cosine similarity) for image matching. Without domain adaptation or transfer learning, the learned model is fixed as is, and is thus not adaptable to various unseen scenarios. Therefore, when generalization ability is a concern, an adaptive matching ability is desirable for a given model architecture.\n\nIn this paper, we focus on generalizable and ready-to-use person re-identification, through direct cross-dataset evaluation. Beyond representation learning, we consider how to formulate query-adaptive image matching directly on deep feature maps.\nSpecifically, we treat image matching as finding local correspondences in feature maps, and construct query-adaptive convolution kernels on the fly to achieve local matching (see Fig. \ref{fig:qaconv}). In this way, the learned model benefits from adaptive convolution kernels in the final layer, specific to each image, and the matching process and result are interpretable (see Fig. \ref{fig:top-examples}), similar to traditional feature correspondence approaches \cite{SIFT,Bay-CVIU-08}. Probably because finding local correspondences through query-adaptive convolution is a process common to different domains, this explicit matching is more generalizable than representation features to unseen scenarios, such as unknown misalignments, pose or viewpoint changes. We call this Query-Adaptive Convolution QAConv. 
To facilitate end-to-end training of this architecture, we further build a class memory module to cache feature maps of the most recent samples of each class, so as to compute image matching losses for metric learning.\n\nThrough direct cross-dataset evaluation without further transfer learning, the proposed method achieves results comparable to many transfer learning methods for person re-identification. Besides, to exploit the prior spatial-temporal structure of a camera network, a model-free temporal cooccurrence based score weighting method is proposed, named Temporal Lifting (TLift).\nThis is also computed on the fly for each query image, without statistical learning of a transition time model in advance. As a result, TLift further improves person re-identification, leading to state-of-the-art results in cross-dataset evaluations.\n\nTo summarize, the novelties of this work include (i) a new deep image matching approach with query-adaptive convolutions, along with a class memory module for end-to-end training, and (ii) a model-free temporal cooccurrence based score weighting method. The advantages of this work are also two-fold. First, the proposed image matching method is interpretable, well-suited to handling misalignments, pose, or viewpoint changes, and generalizes well to unseen domains. Second, both QAConv and TLift can be computed on the fly, and they are complementary to many other methods. For example, QAConv can serve as a better pre-trained model for transfer learning, and TLift can be readily applied by most person re-identification algorithms as a post-processing step.\n\n\section{Related Works}\nDeep learning approaches have largely advanced person re-identification in recent years~\cite{Ye2020Survey}. However, due to limited labeled data and the large diversity of real-world surveillance,\nthese methods usually have poor generalization ability in unseen scenarios. To address this, many unsupervised domain adaptation (UDA) methods have been proposed~\cite{peng2016unsupervised,chang2019disjoint,wang2018transferable,li2018unsupervised,fan2018unsupervised,li2019unsupervised,Zhong-CVPR19-ECN,Yu-CVPR19-MAR,Yang-CVPR19-PAUL}, which show better cross-dataset results than traditional methods, though they require further training on the target domain. QAConv is orthogonal to transfer learning methods, as it can provide a better baseline model for them (see Section \ref{subsec:sota} and Table \ref{tab:duke+market}).\n\nThere are many representation learning methods proposed to deal with viewpoint changes and misalignments in person re-identification, such as part-aligned feature representations~\cite{sun2017pcb,wang2018learning,suh2018part,zhao2017deeply}, pose-adapted feature representations~\cite{zhao2017spindle,saquib2018pose}, human parsing based representations~\cite{kalayeh2018human}, local neighborhood matching~\cite{ahmed2015improved,Li-CVPR-2014-DeepReID}, and attentional networks \cite{liu2017end,qian2017multi,liu2017hydraplus,xu2017jointly,si2018dual,li2018harmonious,xu2018attention}. While these methods achieve high accuracy when trained and tested on the same dataset, their generalization ability to other datasets is mostly unknown. Besides, beyond representation learning, QAConv focuses on image matching via local correspondences.\n\nGeneralizable person re-identification was first studied in our previous works \cite{yi2014deep,Hu2014Cross}, where direct cross-dataset evaluation was proposed. More recently, Song et al. 
\cite{song2019generalizable} proposed a domain-invariant mapping network by meta-learning, and Jia et al. \cite{jia2019frustratingly} applied the IBN-Net \cite{pan2018two} to improve generalizability, while QAConv was preliminarily reported in \cite{Liao-arXiv2019-QAConv}. QAConv is orthogonal to methods of network design; for example, it can also be applied on the IBN-Net for improvements.\n\nFor deep feature matching, Kronecker-Product Matching (KPM) \cite{shen2018end} computes a cosine similarity map by outer product for softly aligned element-wise subtraction. Besides, Bilinear Pooling \cite{Lin2015Bilinear,Ustinova2017bilinear,suh2018part-aligned} and Non-local Neural Networks \cite{wang2018non} also apply the outer product for part-aligned or self-attended representation learning. Different from the above methods, QAConv is a convolutional matching method rather than a simple outer product, especially when its kernel size $s>1$. It is explicitly designed for local correspondence matching, interpretation, and generalization, in a straightforward way without other branches.\n\nFor post-processing, re-ranking is a technique for refining matching scores, which further improves person re-identification~\cite{liu2013pop,yu2017divide,zhong2017re,saquib2018pose}. Besides, temporal information is also a useful cue to facilitate cross-camera person re-identification~\cite{lv2018unsupervised,wang2019spatial-temporal}. While existing methods model transition times across different cameras and encounter difficulties with complex transition time distributions, the proposed TLift method applies a cooccurrence constraint within each camera to avoid estimating transition times, and it is model-free and can be computed on the fly.\n\nFor memory based losses, ECN \cite{Zhong-CVPR19-ECN} proposed an exemplar memory which caches feature vectors of every instance for UDA. This makes instance-level label inference convenient but limits its scalability. In contrast, the class memory was independently designed \cite{Liao-arXiv2019-QAConv} and is more efficient as it works at the class level.\n\n\section{Query-adaptive Convolution}\n\subsection{Query-adaptive Convolutional Matching}\n\nFor face recognition and person re-identification, most existing methods do not explicitly consider the relationship between the two input images under matching; instead, like classification, they treat each image independently and apply the learned model to extract a fixed feature representation. Then, image matching is simply a distance measure between two representation vectors, regardless of the direct relationship between the actual contents of the two images.\n\nIn this paper, we consider the relationship between two images, and try to formulate adaptive image matching directly in deep feature maps. Specifically, we treat image matching as finding local correspondences in feature maps, and construct query-adaptive convolution kernels on the fly to achieve local matching. As shown in Fig. \ref{fig:qaconv} and Fig. \ref{fig:arch}, to match two images, each image is first fed forward into a backbone CNN, resulting in a final feature map of size $[1, d, h, w]$, where $d$ is the number of output channels, and $h$ and $w$ are the height and width of the feature map, respectively. Then, the channel dimension of both feature maps is normalized by the $\ell2$-norm. 
After that, local patches of size $[s, s]$ at every location of the query feature map are extracted, and then reorganized into a $[hw, d, s, s]$ convolution kernel, with $d$ input channels, $hw$ output channels, and kernel size $[s, s]$. This acts as a query-adaptive convolution kernel, with parameters constructed on the fly from the input, in contrast to the fixed convolution kernels in the learned model. Upon this, the adaptive kernel can be used to perform a convolution on another feature map, resulting in $[1, hw, h, w]$ similarities.\n\nSince the feature channels are $\ell2$-normalized, when $s=1$, the convolution in fact measures the cosine similarity at every location of the two feature maps.\nBesides, since the convolution kernel is adaptively constructed from the image content, these similarity values exactly reflect the local matching results between the two input images. Therefore, an additional global max pooling (GMP) operation will output the best local matches, and the maximum indices found by GMP indicate the best locations of local correspondences, which can be further used to interpret the matching result, as shown in Fig. \ref{fig:top-examples}.\nNote that GMP can also be done along the $hw$ axis of the $[1, hw, h, w]$ similarity map. That is, seeking the best matches can be carried out from both sides of the images. Concatenating the two outputs results in a similarity vector of size $2hw$ for each pair of images.
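To make the tensor shapes above concrete, the following is a minimal sketch of this matching step for the default kernel size $s=1$ (PyTorch-style; the function name and layout are our own illustration, and the BN-FC-BN similarity head described in the next subsection is omitted):\n\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef qaconv_match(query_fm, gallery_fm):\n    # query_fm, gallery_fm: [1, d, h, w] backbone feature maps\n    q = F.normalize(query_fm, dim=1)  # l2-normalize the channels\n    g = F.normalize(gallery_fm, dim=1)\n    _, d, h, w = q.shape\n    # the hw query locations become hw kernels of shape [d, 1, 1]\n    kernel = q.permute(0, 2, 3, 1).reshape(h * w, d, 1, 1)\n    # convolving yields cosine similarities between every query\n    # location and every gallery location: [1, hw, h, w]\n    sim = F.conv2d(g, kernel).reshape(h * w, h * w)\n    best_g = sim.max(dim=1).values  # best match per query location\n    best_q = sim.max(dim=0).values  # best match per gallery location\n    return torch.cat([best_g, best_q])  # similarity vector, size 2hw\n\end{verbatim}\nFor $s>1$, the same idea applies with overlapping $s\times s$ patches as kernels, which is no longer a plain outer product.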
\subsection{Network Architecture}\n\begin{figure}\n\begin{minipage}{58mm}\n\centering\n\includegraphics[width=58mm]{arch}\n\caption{Architecture of the QAConv. GMP: global max pooling. BN: batch normalization. FC: fully connected layer.}\n\label{fig:arch}\n\end{minipage}\n\hspace{4mm}\n\begin{minipage}{58mm}\n\centering\n\includegraphics[width=58mm]{TLift}\n\caption{Illustration of the proposed TLift approach.}\n\label{fig:TLift}\n\end{minipage}\n\end{figure}\nThe architecture of the proposed query-adaptive convolution method is shown in Fig. \ref{fig:arch}. It consists of a backbone CNN, the QAConv layer for local matching, a class memory layer for training, a global max pooling layer, a BN-FC-BN block, and, finally, a similarity output by a sigmoid function, used for evaluation in the test phase or loss computation in the training phase.\nThe output size of the FC layer is $1$, so it acts as a binary classifier or a similarity metric, indicating whether or not a pair of images belongs to the same class. The two BN (batch normalization \cite{Ioffe-BatchNorm-ICML15}) layers are both one-dimensional. They are used to normalize the similarity output and stabilize the gradient during training.\n\n\subsection{Class Memory and Update}\nWe propose a class memory module to facilitate the end-to-end training of the QAConv network. Specifically, a $[c, d, h, w]$ tensor buffer is registered, where $c$ is the number of classes. For each mini batch of size $b$,\nthe $[b, d, h, w]$ feature map tensor of the mini batch will be updated into the memory buffer. We use a direct assignment update strategy; that is, each $[1, d, h, w]$ sample of class $i$ from the mini batch will be assigned into location $i$ of the $[c, d, h, w]$ memory buffer.\n\nAn exponential moving average update could also be used here. However, in our experience it is inferior to the direct replacement update. There might be two reasons for this. First, the replacement update caches the feature maps of the most recent samples of each class, so as to reflect the most up-to-date state of the current model for loss computation. Second, since our task is to carry out image matching with local details in feature maps for correspondences, an exponential moving average may smooth out the local details of samples from the same class.\n\n\subsection{Loss Function}\nWith a mini batch of size $[b, d, h, w]$ and a class memory of size $[c, d, h, w]$, $b\times c$ pairs of similarity values will be computed by QAConv after the BN-FC-BN block. We use a sigmoid function to map the similarity values into $[0,1]$, and compute the binary cross-entropy loss. Since there are far more negative than positive pairs, to balance them and enable online hard example mining, we apply the focal loss \cite{lin2017focal} to weight the binary cross-entropy. That is,\n\begin{equation}\label{eq:loss}\n\ell(\theta) = -\frac{1}{b}\sum_{i=1}^{b}\sum_{j=1}^{c}(1-\hat{p}_{ij}(\theta))^{\gamma}\log(\hat{p}_{ij}(\theta)),\\\n\end{equation}\nwhere $\theta$ is the network parameter,\n$\gamma=2$ is the focusing parameter \cite{lin2017focal}, and\n\begin{equation}\n\begin{aligned}\n&\hat{p}_{ij}=\n\begin{cases}\np_{ij} & \mbox{if $y_{ij}=1$,}\\\n1-p_{ij}& \mbox{otherwise,}\n\end{cases}\n\end{aligned}\n\end{equation}\nwhere $y_{ij}=1$ indicates a positive pair, $y_{ij}=0$ a negative pair, and $p_{ij}\in[0,1]$ is the sigmoid probability.
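The following is a minimal sketch of one training step combining Eq. (\ref{eq:loss}) with the direct replacement update of the class memory (PyTorch-style; the function and variable names are our own illustration, and the probability is clamped for numerical stability, a detail the text does not specify):\n\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef focal_bce_and_update(scores, labels, memory, batch_fm, gamma=2.0):\n    # scores: [b, c] similarity logits from the BN-FC-BN block\n    # labels: [b] class indices; memory: [c, d, h, w] class buffer\n    b, c = scores.shape\n    y = F.one_hot(labels, c).float()          # y_ij\n    p = torch.sigmoid(scores)                 # p_ij in [0, 1]\n    p_hat = torch.where(y > 0, p, 1.0 - p)    # \hat{p}_ij\n    log_p = torch.log(p_hat.clamp_min(1e-12))\n    loss = -((1.0 - p_hat) ** gamma * log_p).sum() \/ b\n    with torch.no_grad():                     # replacement update,\n        memory[labels] = batch_fm.detach()    # after the loss\n    return loss\n\end{verbatim}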
\section{Temporal Lifting}\nFor person re-identification, to exploit the prior spatial-temporal structure of a camera network, usually a transition time model is learned to measure the transition probability. However, for a complex camera network with various person transition patterns, it is not easy to learn a robust transition time distribution. In contrast, in this paper a model-free temporal cooccurrence based score weighting method is proposed, called Temporal Lifting (TLift). TLift does not model cross-camera transition times, which could be variable and complex. Instead, TLift makes use of a group of nearby persons in each single camera, and finds similarities between them.\n\nFig. \ref{fig:TLift} illustrates the idea. A basic assumption is that people nearby in one camera are likely still nearby in another camera. Therefore, their corresponding matches in other cameras can serve as pivots to enhance the weights of other nearby persons. In Fig. \ref{fig:TLift}, $A$ is the query person. $E$ is more similar than $A'$ to $A$ in another camera. With nearby persons $B$ and $C$, and their top retrievals $B'$ and $C'$ acting as pivots, the matching score of $A'$ can be temporally lifted, since it is a nearby person of $B'$ and $C'$, while the matching score of $E$ will be reduced, since there is no such pivot.\n\nFormally, suppose $A$ is the query person in camera $Q$. Then the set of nearby persons to $A$ in camera $Q$ is defined as $R = \{B | \Delta T_{AB} < \tau, \forall B \in Q\}$,\nwhere $\Delta T_{AB}$ is the within-camera time difference between persons $A$ and $B$, and $\tau$ is a threshold on $\Delta T$ to define nearby persons. Then, for each person in $R$, cross-camera person retrieval will be performed on a gallery camera $G$ by QAConv or other methods, and the overall top K retrievals for $R$ are defined as the pivot set $P$. Next, each person in $P$ acts as an ensemble point for 1D kernel density estimation on within-camera time differences in $G$, and the temporal matching probability between $A$ and any person $X$ in camera $G$ will be computed as\n\begin{equation}\np_t(A,X) = \frac{1}{|P|}\sum_{B\in P}e^{-\frac{\Delta T_{BX}^2}{\sigma^2}},\n\end{equation}\nwhere $\sigma$ is the sensitivity parameter of the time difference. Then, this temporal probability is used to weight the similarity score of appearance models using a multiplication fusion, $p(A,X) = (p_t(A,X) + \alpha)p_a(A,X)$,\nwhere $p_a(A,X)$ is the appearance-based matching probability (e.g. by QAConv), and $\alpha$ is a regularizer.\n\nThis way, true positives near pivots will be lifted, while hard negatives far from pivots will be suppressed. Note that this is also computed on the fly for each query image, without learning a transition time model in advance. Therefore, it does not require training data, and can be readily applied by many other person re-identification methods.
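Putting these definitions together, a compact sketch of TLift could look as follows (NumPy-style; the names, and the reading of the ``overall top K retrievals'' as the union of per-member top-K lists, are our own interpretation):\n\begin{verbatim}\nimport numpy as np\n\ndef tlift(p_a, t_q, t_g, a, tau=100, sigma=200, K=10, alpha=0.2):\n    # p_a: [nq, ng] appearance probabilities between the persons of\n    #      query camera Q and gallery camera G; a: index of query A\n    # t_q, t_g: timestamps (seconds) within cameras Q and G\n    R = np.abs(t_q - t_q[a]) < tau            # nearby persons in Q\n    P = np.unique(np.argsort(-p_a[R], axis=1)[:, :K])  # pivot set\n    dt = t_g[None, :] - t_g[P][:, None]       # [|P|, ng]\n    p_t = np.exp(-dt ** 2 \/ sigma ** 2).mean(axis=0)   # 1D KDE\n    return (p_t + alpha) * p_a[a]             # multiplication fusion\n\end{verbatim}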
\section{Experiments}\n\subsection{Implementation Details}\nThe proposed method is implemented in PyTorch, based upon an adapted version \cite{zhong2018camstyle} of the open source person re-identification library (open-reid)\footnote{https:\/\/cysu.github.io\/open-reid\/}. Person images are resized to $384\times128$. The backbone network is the ResNet-152 \cite{he2016resnet} pre-trained on ImageNet, unless otherwise stated. The layer3 feature map of the backbone network is used, since the size of the layer4 feature map is too small. A $1\times1$ convolution with 128 channels is further appended to reduce the final feature map size. The batch size for training is 32. The SGD optimizer is applied, with a learning rate of 0.001 for the backbone network, and 0.01 for the newly added layers. They are decayed by 0.1 after 40 epochs, and the training stops at 60 epochs. The whole QAConv network is jointly trained end-to-end, while the class memory is updated only after the loss computation. Considering the memory consumption and efficiency, the kernel size of QAConv is set to $s=1$. The parameters for TLift are $\tau=100$, $\sigma=200$, $K=10$, and $\alpha=0.2$. They are not sensitive within a broad range, as analyzed in the Appendix.\n\nA random occlusion module is implemented for data augmentation, which is similar to the random erasing \cite{zhong2020random} and cutout \cite{devries2017improved} methods (see Appendix for comparisons). Specifically, a square area is generated, with the side length randomly sampled up to $0.8\times$ the image width. Then this square area is filled with white pixels. It is useful for QAConv because random occlusion forces QAConv to learn various local correspondences, instead of only salient but easy ones. Beyond this, only random horizontal flipping is used for data augmentation.\n\n\subsection{Datasets}\nExperiments were conducted on four large person re-identification datasets: Market-1501 \cite{zheng2015smarket}, DukeMTMC-reID \cite{ristani2016duke,zheng2017unlabeled}, CUHK03 \cite{Li-CVPR-2014-DeepReID}, and MSMT17~\cite{Wei-CVPR18-PTGAN}. The Market-1501 dataset contains 32,668 images of 1,501 identities captured from 6 cameras. There are 12,936 images from 751 identities for training, and 19,732 images from 750 identities for testing. The DukeMTMC-reID is a subset of the multi-target and multi-camera pedestrian tracking dataset DukeMTMC~\cite{ristani2016duke}. It includes 1,812 identities and 36,411 images, where 16,522 images of 702 identities are used for training, and the remaining images for testing. The CUHK03 dataset includes 13,164 images of 1,360 pedestrians. We adopted the CUHK03-NP protocol provided in \cite{zhong2017re}, where images of 767 identities were used for training, and images of the other 700 identities were used for testing. Besides, we used the detected subset for evaluation, which is more challenging. The MSMT17 dataset is the largest person re-identification dataset to date, containing 4,101 identities and 126,441 images captured from 15 cameras. It is divided into a training set of 32,621 images from 1,041 identities, and a test set with the remaining images from 3,010 identities.\n\nCross-dataset evaluation was performed across these datasets, by training on the training subset of one dataset (except that for MSMT17 we used all images for training, following \cite{Yu-CVPR19-MAR,Yang-CVPR19-PAUL}), and evaluating on the test subset of another dataset. The cumulative matching characteristic (CMC) and mean average precision (mAP) were used as the performance evaluation metrics. All evaluations followed the single-query evaluation protocol.\n\nThe Market-1501 and DukeMTMC-reID datasets provide frame numbers, which makes it possible to evaluate the proposed TLift method. The DukeMTMC-reID dataset has a good global and continuous record of frame numbers, and it is synchronized by providing offset times. In contrast, the Market-1501 dataset has only independent frame numbers for each session of videos from each camera.\nAccordingly, we simply built a cumulative frame record by assuming continuous video sessions. After that, frame numbers were converted to seconds by dividing them by the frames per second (FPS) of the video records, where FPS=59.94 for the DukeMTMC-reID dataset and FPS=25 for the Market-1501 dataset.\n\n\subsection{Ablation Study}\nSome ablation studies have been conducted to understand the proposed method, in the context of direct cross-dataset evaluation between the Market-1501 and DukeMTMC-reID datasets.\nFirst, to understand the QAConv loss, several other loss functions, including the classical softmax-based cross-entropy loss, the center loss~\cite{Wen-ECCV16-CenterLoss,Jin-IJCB17-CenterLossReID}, the Arc loss (derived from the ArcFace method~\cite{Deng-CVPR19-ArcFace}, which is effective for face recognition), and the proposed class memory based loss, are evaluated for comparison. For these compared loss functions, the global average pooling of layer4 (better than layer3) of the ResNet-152 is used for feature representation, and the cosine similarity measure is adopted instead of the QAConv similarity. For the class memory loss, feature vectors are cached in memory instead of learnable parameters, and the same BN layer and Eq. (\ref{eq:loss}) are applied after calculating the cosine similarity values between mini-batch features and memory features.\n\nFrom the results shown in Table \ref{tab:loss}, it is obvious that QAConv improves over the existing loss functions by a large margin, with 13.7\%-19.5\% improvements in Rank-1, and 9.6\%-11.1\% in mAP. Interestingly, large margin classifiers improve the softmax cross-entropy baseline when trained on the Market-1501 dataset, but do not bring such improvements when trained on DukeMTMC-reID. This is probably due to the many ambiguously labeled or closely walking persons in DukeMTMC-reID (see Section \ref{sec:discuss}), which may confuse the strict large margin training. 
Note that the class memory based loss only performs comparably to the other existing losses, indicating that the large improvement of QAConv is mainly due to the new matching mechanism, rather than the class memory based loss function. Besides, the recently published Arc loss is one of the best face recognition methods, but it does not seem to be as powerful when applied to person re-identification\footnote{We have tried different hyperparameters and reported the best results. The best margin values were found to be 0.5 on Market-1501 and 0.2 on DukeMTMC-reID.}. In our experience, the choice of loss function does not largely influence person re-identification performance. Similar to face recognition, existing studies \cite{Wen-ECCV16-CenterLoss,Deng-CVPR19-ArcFace} show that new loss functions do bring improvements, but not significant ones over the softmax cross-entropy baseline. Therefore, we may conclude that the large improvement observed here is due to the new matching scheme, instead of different loss configurations (see Appendix for more analyses).\n\begin{table}\n \centering\n \caption{Role of loss functions (\%).}\label{tab:loss}\n \begin{tabular}{|c|c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Market$\rightarrow$Duke} & \multicolumn{2}{|c|}{Duke$\rightarrow$Market} \\\n \cline{2-5}\n & Rank-1 & mAP & Rank-1 & mAP \\\n \hline\n Softmax cross-entropy & 34.9 & 18.4 & 48.5 & 21.4\\\n Arc loss~\cite{Deng-CVPR19-ArcFace} & 35.3 & 17.1 & 48.9 & 21.4\\\n Center loss~\cite{Wen-ECCV16-CenterLoss,Jin-IJCB17-CenterLossReID} & 38.9 & 22.1 & 48.8 & 22.0\\\n Class memory loss & 40.7 & 21.8 & 47.8 & 20.5\\\n QAConv & \textbf{54.4} & \textbf{33.6} & \textbf{62.8} & \textbf{31.6}\\\n \hline\n \end{tabular}\n\end{table}\n\nNext, to understand the role of re-ranking (RR), the k-reciprocal encoding method \cite{zhong2017re} is applied upon QAConv. From the results shown in Table \ref{tab:tlift}, it can be seen that enabling re-ranking does improve the performance a lot, especially in mAP, which is increased by 18.8\% under Market$\rightarrow$Duke, and 19.6\% under Duke$\rightarrow$Market. This improvement is much more significant with QAConv than with other methods, as reported in \cite{zhong2017re}. This is probably because the new QAConv matching scheme better measures the similarity between images, which benefits the reverse neighbor based re-ranking method.\n\nFurthermore, based on QAConv and re-ranking, the contribution of TLift is evaluated, compared to a recent method called TFusion (TF) \cite{Lv18-TFusion}, which was originally designed to iteratively improve transfer learning. From the results shown in Table \ref{tab:tlift}, it can be observed that employing TLift to exploit temporal information further improves the results, with Rank-1 improved by 8.2\%-10.2\%, and mAP by 7.0\%-8.8\%. This improvement is complementary to re-ranking, so they can be combined. As for the existing method TFusion, it appears to be unstable: a large improvement can be observed under Market$\rightarrow$Duke, but little improvement is obtained under Duke$\rightarrow$Market, where the mAP is even clearly decreased\footnote{Note that the TFusion parameters were optimized on each dataset to get the best results, while for TLift we used fixed parameters for all datasets (see Appendix for analysis).}. 
This may be because TFusion is based on learning transition time distributions across cameras, which makes it hard to deal with complex camera networks and person transitions, as in Market-1501 (various repeated presences per person in one camera). In contrast, the TLift method only depends on single-camera temporal information, which is relatively easier to handle. Note that TLift can also be generally applied to other methods for improvements, as shown in the Appendix. Besides, as shown in Table \ref{tab:tlift}, directly applying TLift to QAConv without re-ranking also improves the performance a lot.\n\begin{table}\n \centering\n \caption{Performance (\%) of different post-processing methods.}\label{tab:tlift}\n \begin{tabular}{|l|c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Market$\rightarrow$Duke} & \multicolumn{2}{|c|}{Duke$\rightarrow$Market} \\\n \cline{2-5}\n & Rank-1 & mAP & Rank-1 & mAP \\\n \hline\n QAConv & 54.4 & 33.6 & 62.8 & 31.6 \\\n QAConv + TLift & 62.7 & 45.3 & 61.5 & 40.6\\\n QAConv + RR~\cite{zhong2017re} & 61.8 & 52.4 & 68.5 & 51.2 \\\n QAConv + RR + TF~\cite{Lv18-TFusion} & \textbf{70.7} & \textbf{61.9} & 68.6 & 47.2\\\n QAConv + RR + TLift & 70.0 & 61.2 & \textbf{78.7} & \textbf{58.2} \\\n \hline\n \end{tabular}\n\end{table}\n\nFinally, to understand the effect of the backbone network, the QAConv results with the ResNet-50 as the backbone are also reported in Tables \ref{tab:duke+market} and \ref{tab:cuhk+msmt}, compared to the default ResNet-152 (denoted as QAConv$_{50}$ and QAConv$_{152}$, respectively). As can be observed, the larger ResNet-152 network does achieve better performance, due to its larger learning capacity. It improves the Rank-1 accuracy over QAConv$_{50}$ by 1.3\%-7.3\%, and the mAP by 0.8\%-5.5\%. Besides, there are also consistent improvements when combining re-ranking and TLift. Hence, it seems that this larger network, which contains more learnable parameters, does not suffer from overfitting when equipped with QAConv. Note that, though ResNet-152 is a very large network requiring heavy computation, in practice it can be efficiently reduced by knowledge distillation \cite{hinton2015distilling}.\n\n\subsection{Comparison to the State of the Art}\label{subsec:sota}\nSince person re-identification is a very active research area, there are a great number of methods; here we only list recent results for comparison, due to limited space. The cross-dataset evaluation results on the four datasets are listed in Tables \ref{tab:duke+market} and \ref{tab:cuhk+msmt}. Considering that many person re-identification methods employ the ResNet-50 network, for a fair comparison, the following analysis is based on the QAConv$_{50}$ results. Note that this paper mainly focuses on cross-dataset evaluation. Therefore, some recent methods performing unsupervised learning on the target dataset are not compared here, such as TAUDL~\cite{li2018unsupervised}, UTAL~\cite{li2019unsupervised}, and UGA~\cite{Wu-CVPR19-UGA}; this is also partially because they use single-camera target identity labels for training. There are mainly two groups of methods listed in Table \ref{tab:duke+market}, namely unsupervised transfer learning based methods, and direct cross-dataset evaluation based methods. 
The first group of methods requires images from the target dataset for unsupervised learning, and is thus not directly comparable to the second group, which directly evaluates on the target dataset in consideration of real applications. The proposed QAConv method belongs to the second group. There are very few existing results under the same setting for the second group, except some baselines of other recent methods and PN-GAN \cite{qian2018pose}, which aims at augmenting the source training data by a GAN. For the comparison to the transfer learning methods, we note that QAConv can serve as a better pre-trained model for them, and computing RR+TLift on the fly is also more efficient than training on the target dataset.\n\begin{table}\n \centering\n \caption{Comparison of the state-of-the-art cross-dataset evaluation results (\%) with DukeMTMC-reID and Market-1501 as the target datasets.}\label{tab:duke+market}\n \begin{tabular}{|l||c|c|c|c||c|c|c|c|}\n \hline\n \multirow{2}{*}{\tabincell{c}{Method}} & \multicolumn{2}{|c|}{Training} & \multicolumn{2}{|c||}{Test: Duke} & \multicolumn{2}{|c|}{Training} & \multicolumn{2}{|c|}{Test: Market}\\\n \cline{2-9}\n & Source & Target & R1 & mAP & Source & Target & R1 & mAP \\\n \hline\n \hline\n PUL, TOMM18~\cite{fan2018unsupervised} & Market & Duke & 30.4 & 16.4 & Duke & Market & 44.7 & 20.1\\\n TJ-AIDL, CVPR18~\cite{wang2018transferable} & Market & Duke & 44.3 & 23.0 & Duke & Market & 58.2 & 26.5\\\n MMFA, BMVC18~\cite{lin2018multi} & Market & Duke & 45.3 & 24.7 & Duke & Market & 56.7 & 27.4\\\n CFSM, AAAI19~\cite{chang2019disjoint} & Market & Duke & 49.8 & 27.3 & Duke & Market & 61.2 & 28.3\\\n DECAMEL, TPAMI19~\cite{Yu-TPAMI19-DECAMEL} & - & - & - & - & Multi & Market & 60.2 & 32.4\\\n PAUL, CVPR19~\cite{Yang-CVPR19-PAUL} & Market & Duke & 56.1 & 35.7 & Duke & Market & 66.7 & 36.8\\\n ECN, CVPR19~\cite{Zhong-CVPR19-ECN} & Market & Duke & 63.3 & 40.4 & Duke & Market & 75.1 & 43.0\\\n CDS, ICME19~\cite{Wu-ICME19-CDS} & Market & Duke & 67.2 & 42.7 & Duke & Market & 71.6 & 39.9\\\n \hline\n ECN baseline, CVPR19~\cite{Zhong-CVPR19-ECN} & Market & & 28.9 & 14.8 & Duke & & 43.1 & 17.7\\\n PN-GAN, ECCV18~\cite{qian2018pose} & Market & & 29.9 & 15.8 & - & & - & -\\\n QAConv$_{50}$ & Market & & 48.8 & 28.7 & Duke & & 58.6 & 27.2 \\\n QAConv$_{152}$ & Market & & 54.4 & 33.6 & Duke & & 62.8 & 31.6 \\\n QAConv$_{50}$ + RR + TLift & Market & & 64.5 & 55.1 & Duke & & 74.6 & 51.5 \\\n QAConv$_{152}$ + RR + TLift & Market & & 70.0 & 61.2 & Duke & & 78.7 & 58.2 \\\n \hline\n \hline\n MAR, CVPR19~\cite{Yu-CVPR19-MAR} & MSMT & Duke & 67.1 & 48.0 & MSMT & Market & 67.7 & 40.0\\\n PAUL, CVPR19~\cite{Yang-CVPR19-PAUL} & MSMT & Duke & 72.0 & 53.2 & MSMT & Market & 68.5 & 40.1\\\n \hline\n MAR baseline, CVPR19~\cite{Yu-CVPR19-MAR} & MSMT & & 43.1 & 28.8 & MSMT & & 46.2 & 24.6\\\n PAUL baseline, CVPR19~\cite{Yang-CVPR19-PAUL} & MSMT & & 65.7 & 45.6 & MSMT & & 59.3 & 31.0\\\n QAConv$_{50}$ & MSMT & & 69.4 & 52.6 & MSMT & & 72.6 & 43.1 \\\n QAConv$_{152}$ & MSMT & & 72.2 & 53.4 & MSMT & & 73.9 & 46.6 \\\n QAConv$_{50}$ + RR + TLift & MSMT & & 80.3 & 77.2 & MSMT & & 86.5 & 72.2\\\n QAConv$_{152}$ + RR + TLift & MSMT & & 82.2 & 78.4 & MSMT & & 88.4 & 76.0\\\n \hline\n \end{tabular}\n\end{table}\n\n\textbf{DukeMTMC-reID dataset.} As can be observed from Table \ref{tab:duke+market}, when trained on the Market-1501 dataset, QAConv achieves the best performance in the direct evaluation group with a 
large margin. When compared to the transfer learning methods, QAConv also outperforms many of them, except some very recent methods, indicating that QAConv enables the network to learn how to match two person images, and that the learned model generalizes well to unseen domains without transfer learning. Besides, by enabling re-ranking and TLift, the proposed method achieves the best result among all, except the Rank-1 of CDS. Note, though, that the re-ranking and TLift methods can also be incorporated into other methods; therefore, we list their results separately. However, both of them are calculated on the fly without learning in advance, so together with QAConv, it appears that a ready-to-use method with good generalization ability can be achieved even without further UDA, which is a nice solution considering that UDA requires heavy deep learning computation in the deployment phase.\n\nWhen trained on MSMT17, QAConv itself beats all other methods except the transfer learning method PAUL. This is also the second best result among all existing methods taking DukeMTMC-reID as the target dataset, regardless of the training source. This clearly indicates QAConv's superiority in learning from large-scale data. It is preferred in practice in the sense that, when trained with large-scale data, there may be no need to adapt the learned model for deployment.\n\n\textbf{Market-1501 dataset.} With Market-1501 as the target dataset, as shown in Table \ref{tab:duke+market}, similarly, when trained with MSMT17, QAConv itself also achieves the best performance among the others, except the Rank-1 of ECN. This can be considered a large advancement in cross-dataset evaluation, which is a better evaluation strategy for understanding the generalization ability of algorithms. Besides, when equipped with RR+TLift, the proposed method achieves the state of the art, with a Rank-1 accuracy of 86.5\% and an mAP of 72.2\%. Note that this comparison is not meant to be strictly \textit{fair}. We would like to share that, beyond many recent efforts in UDA, enlarging the training data and exploiting on-the-fly computations such as re-ranking and temporal fusion may also lead to good performance in unknown domains, with the advantage of not having to train deep models everywhere.\n\n\textbf{CUHK03 dataset.} The CUHK03 and MSMT17 datasets present large domain gaps to the others. For CUHK03, it can be observed from Table \ref{tab:cuhk+msmt} that, with either the Market-1501 or the DukeMTMC-reID dataset as the training set, QAConv without UDA performs better than the UDA method PUL~\cite{fan2018unsupervised}, and fairly comparably to another recent transfer learning method, CDS \cite{Wu-ICME19-CDS}. However, no method performs well on the CUHK03 dataset. Only with the large MSMT17 dataset as the source training data does the proposed method perform relatively better.\n\n\textbf{MSMT17 dataset.} With MSMT17 as the target, only QAConv does not require adaptation in Table \ref{tab:cuhk+msmt}. Nevertheless, it performs better than PTGAN \cite{Wei-CVPR18-PTGAN} and is in part comparable to ECN \cite{Zhong-CVPR19-ECN}. This further confirms the generalizability of QAConv under large domain gaps, since without UDA it is already in part comparable to the state-of-the-art UDA methods. 
Note that TLift is not applicable to CUHK03 and MSMT17 because no temporal information is provided.\n\\begin{table}\n \n \\caption{Comparison of the state-of-the-art cross-dataset evaluation results (\\%) with CUHK03-NP (detected) and MSMT17 as the target datasets.}\\label{tab:cuhk+msmt}\n \\hspace{-4mm}\n \\begin{tabular}{|l||c|c|c|c||c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\tabincell{c}{Method}} & \\multicolumn{2}{|c|}{Training} & \\multicolumn{2}{|c||}{Test: CUHK03} & \\multicolumn{2}{|c|}{Training} & \\multicolumn{2}{|c|}{Test: MSMT}\\\\\n \\cline{2-9}\n & Source & Target & R1 & mAP & Source & Target & R1 & mAP \\\\\n \\hline\n \\hline\n PUL, TOMM18~\\cite{fan2018unsupervised} & Market & CUHK03 & 7.6 & 7.3 & - & - & - & - \\\\\n CDS, ICME19~\\cite{Wu-ICME19-CDS} & Market & CUHK03 & 9.1 & 8.7 & - & - & - & - \\\\\n PTGAN~\\cite{Wei-CVPR18-PTGAN}, CVPR18 & - & - & - & - & Market & MSMT & 10.2 & 2.9\\\\\n ECN, CVPR19~\\cite{Zhong-CVPR19-ECN} & - & - & - & - & Market & MSMT & 25.3 & 8.5\\\\\n \\hline\n QAConv$_{50}$ & Market & & 9.9 & 8.6 & Market & & 22.6 & 7.0\\\\\n QAConv$_{152}$ & Market & & 14.1 & 11.8 & Market & & 25.6 & 8.2\\\\\n \\hline\n \\hline\n PUL, TOMM18~\\cite{fan2018unsupervised} & Duke & CUHK03 & 5.6 & 5.2 & - & - & - & - \\\\\n CDS, ICME19~\\cite{Wu-ICME19-CDS} & Duke & CUHK03 & 8.1 & 7.1 & - & - & - & - \\\\\n PTGAN~\\cite{Wei-CVPR18-PTGAN}, CVPR18 & - & - & - & - & Duke & MSMT & 11.8 & 3.3\\\\\n ECN, CVPR19~\\cite{Zhong-CVPR19-ECN} & - & - & - & - & Duke & MSMT & 30.2 & 10.2\\\\\n \\hline\n QAConv$_{50}$ & Duke & & 7.9 & 6.8 & Duke & & 29.0 & 8.9\\\\\n QAConv$_{152}$ & Duke & & 11.0 & 9.4 & Duke & & 32.7 & 10.4\\\\\n \\hline\n \\hline\n QAConv$_{50}$ & MSMT & & 25.3 & 22.6 & - & - & - & -\\\\\n QAConv$_{152}$ & MSMT & & 32.6 & 28.1 & - & - & - & -\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\subsection{Qualitative Analysis and Discussion}\n\\label{sec:discuss}\nA unique characteristic of the proposed QAConv method is its ability to interpret the matching. Therefore, we show some qualitative matching results in Fig. \\ref{fig:samples} for a better understanding of the proposed method. The model used here is trained on the MSMT17 dataset, and the evaluations are done on the query subsets of the Market-1501 and DukeMTMC-reID datasets. Results of both positive pairs and hard negative pairs are shown. Note that only reliable correspondences with matching scores over 0.5 are shown, and the local positions are coarse due to the $24\\times8$ size of the feature map. As can be observed from Fig. \\ref{fig:samples}, the proposed method is able to find correct local correspondences for positive pairs of images, even when there are notable misalignments in both scale and position, pose\/viewpoint changes, occlusions, and mix-ups with other persons, thanks to the local matching mechanism of QAConv in place of global feature representations. Besides, for hard negative pairs, the matching of QAConv still appears to be mostly reasonable, by linking visually similar parts or even the same person (who may be ambiguously labeled or walking closely to other persons).
Note that the QAConv method gains this matching capability by automatic learning, from the supervision of only class labels but not local correspondence labels.\n\\begin{figure*}\n\\centering\n\\includegraphics[width=13mm]{samples\/market\/pos\/13_0,7121_0720_c5s2_061977_00-0720_c3s2_061603_00.jpg} \\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/pos\/11_0,6259_1299_c4s6_008360_00-1299_c2s3_025582_00.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/pos\/8_0,6189_1255_c4s5_051210_00-1255_c2s3_015432_00.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/pos\/10_0,6574_0776_c4s4_028210_00-0776_c1s4_018156_00.jpg}\n\\hspace{3mm}\n\\includegraphics[width=13mm]{samples\/market\/neg\/7_0,7262_1074_c1s5_007861_00-1247_c3s3_020678_00.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/neg\/8_0,6691_0060_c1s1_008501_00-0182_c5s1_034601_00.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/neg\/8_0,5749_1438_c3s3_055978_00-1485_c2s3_089827_00.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/market\/neg\/11_0,5823_0514_c4s2_068373_00-0964_c1s4_061111_00.jpg}\\\\\n0.71 \\hspace{7mm} 0.63 \\hspace{7mm} 0.62 \\hspace{7mm} 0.66\n\\hspace{10mm}\n0.73 \\hspace{7mm} 0.67 \\hspace{7mm} 0.57 \\hspace{7mm} 0.58\\\\\n(a) Positive pairs on Market-1501\n\\hspace{12mm}\n(b) Negative pairs on Market-1501\\\\\n\\vspace{3mm}\n\n\\includegraphics[width=13mm]{samples\/duke\/pos\/11_0,6435_2988_c5_f0089268-2988_c8_f0044540.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/pos\/9_0,6157_4405_c6_f0090766-4405_c7_f0092580.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/pos\/9_0,6628_4508_c8_f0083757-4508_c6_f0110908.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/pos\/11_0,6045_0295_c2_f0099605-0295_c1_f0099259.jpg}\n\\hspace{3mm}\n\\includegraphics[width=13mm]{samples\/duke\/neg\/5_0,5375_0360_c1_f0108265-0361_c5_f0110785.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/neg\/6_0,5773_4572_c8_f0105003-4573_c7_f0134670.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/neg\/9_0,5446_0612_c2_f0163295-4759_c7_f0184320.jpg}\\hspace{1mm}\n\\includegraphics[width=13mm]{samples\/duke\/neg\/6_0,4970_4519_c6_f0115726-4719_c7_f0171037.jpg}\\\\\n0.64 \\hspace{7mm} 0.62 \\hspace{7mm} 0.66 \\hspace{7mm} 0.60\n\\hspace{10mm}\n0.54 \\hspace{7mm} 0.58 \\hspace{7mm} 0.54 \\hspace{7mm} 0.50\\\\\n(c) Positive pairs on DukeMTMC-reID\n\\hspace{5mm}\n(d) Negative pairs on DukeMTMC-reID\\\\\n\\caption{Examples of qualitative matching results by the proposed QAConv method using the model trained on the MSMT17 dataset. Numbers represent similarity scores.}\\label{fig:samples}\n\\end{figure*}\n\nThe QAConv network was trained on an NVIDIA DGX-1 server with two V100 GPU cards. With the backbone network ResNet-50, the training time of QAConv on the DukeMTMC-reID dataset was 1.22 hours. In contrast, the most efficient softmax baseline took 0.72 hours for training. For deployment, ECN~\\cite{Zhong-CVPR19-ECN} reported 1 hour of transfer learning time with DukeMTMC-reID as the target, while MAR~\\cite{Yu-CVPR19-MAR} and DECAMEL~\\cite{Yu-TPAMI19-DECAMEL} reported 10 and 35.2 hours of total learning time, respectively, compared to the ready-to-use QAConv. For inference, with the DukeMTMC-reID dataset as the target, QAConv took 26 seconds for feature extraction and 26 seconds for similarity computation.
In contrast, the softmax baseline took 26 seconds for feature extraction and 0.2 seconds for similarity computation. Besides, the proposed method took 303 seconds for re-ranking and 67 seconds for TLift. This is still efficient, especially for RR+TLift, compared to transfer learning at deployment. Therefore, the overall solution of QAConv+RR+TLift is promising in practical applications.\n\n\nFor a further analysis of memory usage, please see the Appendix. As for TLift, it can only be applied to datasets with good time records. Though this information is easy to obtain in real surveillance, most existing person re-identification datasets do not contain it. Another drawback of TLift is that it cannot be applied to arbitrary query images beyond a camera network, though once an initial match is found, it can be used to refine the search. Besides, it cannot help when there are no nearby persons accompanying the query person.\n\n\\section{Conclusion}\nIn this paper, through extensive experiments, we showed that the proposed QAConv method is quite promising for person matching without further transfer learning, and that it has a much better generalization ability than existing baselines. Though QAConv can also be plugged into other transfer learning methods as a better pre-trained model, in practice, according to the experimental results of this paper, we suggest a ready-to-use solution that works by the following principles. First, a large-scale and diverse training dataset (e.g. MSMT17) is required to learn a generalizable model. Second, a larger network (e.g. ResNet-152) yields better overall performance, and it could be further distilled into smaller networks for efficiency. Finally, score re-ranking and temporal fusion models such as TLift can be computed on the fly in deployment; they can largely improve performance and are more efficient to use than transfer learning.\n\n\\section*{Acknowledgements}\nThis work was partly supported by the NSFC Project \\#61672521.
The authors would like to thank Yanan Wang, who helped produce several illustration figures in this paper, Jinchuan Xiao, who optimized the TLift code, and Anna Hennig, who helped proofread the paper.\n\n\n\\clearpage\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Illustrative Examples}\n\n\\subsection{Use of Kronecker Products}\n\nThe example below illustrates the use of Kronecker products to establish the two stated properties of $P,$ to demonstrate \\eqref{miss1}, and to provide insights into the two observations.\n\\begin{example}\t\\label{ex1}\nConsider $X^{\\prime}$ of \\eqref{Xprimedef} with $N=\\{1,2,3\\}$ in the $n=3$ variables $x_1, x_2,$ and $x_3.$ The $2^3=8$ inequalities of \\eqref{RLTstep} are expressed in \\eqref{prodkron2} as \n\\begin{eqnarray} \\scriptsize\n\\left[\\begin{array}{rr|rr|rr|rr} \nU_1U_2U_3 & -U_1U_2 & -U_1U_3 & U_1 & -U_2U_3 & U_2 & U_3 & -1 \\\\ \n-U_1U_2L_3 & U_1U_2 & U_1L_3 & -U_1 & U_2L_3 & -U_2 & -L_3 & 1 \\\\ \\cline {1-8}\n-U_1L_2U_3 & U_1L_2 & U_1U_3 & -U_1 & L_2U_3 & -L_2 & -U_3 & 1 \\\\ \nU_1L_2L_3 & -U_1L_2 & -U_1L_3 & U_1 & -L_2L_3 & L_2 & L_3 & -1 \\\\ \\cline {1-8}\n-L_1U_2U_3 & L_1U_2 & L_1U_3 & -L_1 & U_2U_3 & -U_2 & -U_3 & 1 \\\\ \nL_1U_2L_3 & -L_1U_2 & -L_1L_3 & L_1 & -U_2L_3 & U_2 & L_3 & -1 \\\\ \\cline {1-8}\nL_1L_2U_3 & -L_1L_2 & -L_1U_3 & L_1 & -L_2U_3 & L_2 & U_3 & -1 \\\\ \n-L_1L_2L_3 & L_1L_2 & L_1L_3 & -L_1 & L_2L_3 & -L_2 & -L_3 & 1 \\\\\n\\end{array}\\right]\\left(\\begin{array}{cccccccc} 1 \\\\ x_3 \\\\ \\cline {1-1} x_2 \\\\ x_2x_3 \\\\ \\cline {1-1} x_1 \\\\ x_1x_3 \\\\ \\cline {1-1} x_1x_2 \\\\ x_1x_2x_3\\\\ \\end{array} \\right) \\geq \\left(\\begin{array}{cccccccc} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0\\\\ \\end{array} \\right). \\label{larger}\n\\end{eqnarray} \nThe equations of \\eqref{ca6} in nonnegative variables $\\boldsymbol{\\lambda}$ take the form\n\\begin{eqnarray} \\tiny\n\\left(\\begin{array}{cccccccc} 1 \\\\ x_3 \\\\ \\cline {1-1} x_2 \\\\ w_{23} \\\\ \\cline {1-1} x_1 \\\\ w_{13} \\\\ \\cline {1-1} w_{12} \\\\ w_{123}\\\\ \\end{array} \\right) = \\frac{1}{d_1d_2d_3} \\left[\\begin{array}{rr|rr|rr|rr} \n1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\ \nL_3 & U_3 & L_3 & U_3 & L_3 & U_3 & L_3 & U_3 \\\\ \\cline {1-8}\nL_2 & L_2 & U_2 & U_2 & L_2 & L_2 & U_2 & U_2 \\\\ \nL_2L_3 & L_2U_3 & U_2L_3 & U_2U_3 & L_2L_3 & L_2U_3 & U_2L_3 & U_2U_3 \\\\ \\cline {1-8}\nL_1 & L_1 & L_1 & L_1 & U_1 & U_1 & U_1 & U_1 \\\\ \nL_1L_3 & L_1U_3 & L_1L_3 & L_1U_3 & U_1L_3 & U_1U_3 & U_1L_3 & U_1U_3 \\\\ \\cline {1-8}\nL_1L_2 & L_1L_2 & L_1U_2 & L_1U_2 & U_1L_2 & U_1L_2 & U_1U_2 & U_1U_2 \\\\ \nL_1L_2L_3 & L_1L_2U_3 & L_1U_2L_3 & L_1U_2U_3 & U_1L_2L_3 & U_1L_2U_3 & U_1U_2L_3 & U_1U_2U_3 \\\\\n\\end{array}\\right]\\left(\\begin{array}{cccccccc} \\lambda_1 \\\\ \\lambda_2 \\\\ \\cline {1-1} \\lambda_3 \\\\ \\lambda_4 \\\\ \\cline {1-1} \\lambda_5 \\\\ \\lambda_6 \\\\ \\cline {1-1} \\lambda_7 \\\\ \\lambda_8\\\\ \\end{array} \\right), \\label{sume}\n\\end{eqnarray}\nwhere the RLT linearization step sets $w_{12}=x_1x_2,$ $w_{13}=x_1x_3,$ $w_{23}=x_2x_3,$ and $w_{123}=x_1x_2x_3.$ \n\nThere are eight extreme points to this system, with extreme point $j$ having $\\lambda_j=d_1d_2d_3$ and $\\lambda_i=0$ for $i \\neq j.$ Then the eight extreme points to $P$ of \\eqref{RLTstep2} and \\eqref{ca4} are given by the eight columns of the above $8 \\times 8$ matrix, less the first row.
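\n\nBefore proceeding, we remark that the inverse relationship between the matrices of \\eqref{larger} and \\eqref{sume} reflects their common Kronecker structure: each is a threefold Kronecker product of $2 \\times 2$ blocks, one block per variable. The short computational sketch below (written in Python, with hypothetical bounds chosen purely for illustration) builds both products blockwise and confirms that the scaled matrix of \\eqref{sume} is the inverse of the matrix of \\eqref{larger}.\n\\begin{verbatim}\nimport numpy as np\n\nL = [0.5, 1.0, 1.5]  # hypothetical lower bounds, for illustration only\nU = [2.0, 3.0, 4.0]  # hypothetical upper bounds\n\nA = np.eye(1)        # matrix of (larger), built as a Kronecker product\nAinv = np.eye(1)     # scaled matrix of (sume), built the same way\nfor Lj, Uj in zip(L, U):\n    # block for variable j: rows encode U_j - x_j >= 0 and x_j - L_j >= 0\n    A = np.kron(A, np.array([[Uj, -1.0], [-Lj, 1.0]]))\n    # inverse block, carrying the factor 1/d_j with d_j = U_j - L_j\n    Ainv = np.kron(Ainv, np.array([[1.0, 1.0], [Lj, Uj]]) / (Uj - Lj))\n\nassert np.allclose(A @ Ainv, np.eye(8))  # the two 8 x 8 matrices are inverses\n\\end{verbatim}\n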
Consequently, we have that $P=conv(T)$ with \n\\begin{equation}\nT=\\tiny\\left\\{ \\left(\\begin{array}{c} x_1 \\\\ x_2 \\\\ x_3 \\\\ w_{12} \\\\ w_{13} \\\\ w_{23} \\\\ w_{123}\\\\ \\end{array} \\right): L_j \\leq x_j \\leq U_j \\; \\forall \\; j=1,2,3, \\; \\left(\\begin{array}{c} x_1x_2 \\\\ x_1x_3 \\\\ x_2x_3 \\\\ x_1x_2x_3 \\\\ \\end{array} \\right)=\\left(\\begin{array}{c} w_{12} \\\\ w_{13} \\\\ w_{23} \\\\ w_{123} \\\\ \\end{array} \\right)\\right\\}, \\nonumber\n\\end{equation}\nas in \\eqref{miss1}.\n\nNow consider any multilinear polynomial $\\sum_{J \\subseteq N}\\alpha_J\\prod_{j \\in J}x_j$ as found in \\mythref{obs1,obs2}, and the corresponding system in variables $\\boldsymbol{\\pi}\\in \\real^{2^n}.$ Using obvious notation, denote \\eqref{larger} by $A\\boldsymbol{v} \\geq \\boldsymbol{0}$ so that the (scaled) $8 \\times 8$ matrix of \\eqref{sume} is $A^{-1}.$ Observe that the eight inequalities of \\eqref{larger} correspond, in order, to the functions $F(K)$ of \\eqref{Fun} having $K= \\emptyset, \\{3\\}, \\{2\\}, \\{2,3\\}, \\{1\\}, \\{1,3\\}, \\{1,2\\}, \\{1,2,3\\}.$ (This order coincides with the variable indices of the products within the vector $\\boldsymbol{v}.$) Accordingly define $\\boldsymbol{\\alpha}^T=(\\alpha_{\\emptyset},\\alpha_3,\\alpha_2,\\alpha_{23},\\alpha_1,\\alpha_{13},\\alpha_{12},\\alpha_{123}),$ and $\\boldsymbol{\\pi} \\in \\real^{8}$ by $\\boldsymbol{\\pi}^T=(\\pi_{\\emptyset},\\pi_3,\\pi_2,\\pi_{23},\\pi_1,\\pi_{13},\\pi_{12},\\pi_{123})$ so that $\\boldsymbol{\\pi}^T=\\boldsymbol{\\alpha}^TA^{-1}.$ Then we have\n\\begin{equation}\n\\sum_{J \\subseteq N} \\alpha_J\\prod_{j \\in J}x_j=\\boldsymbol{\\alpha}^T\\boldsymbol{v}=(\\boldsymbol{\\alpha}^TA^{-1})(A\\boldsymbol{v})=\\boldsymbol{\\pi}^T(A\\boldsymbol{v}) =\\sum_{K \\subseteq N}\\pi_KF(K), \\label{includee}\n\\end{equation}\nwhere the four equalities follow, from left to right, from the definitions of $\\boldsymbol{\\alpha}$ and $\\boldsymbol{v},$ the multiplicative inverse of $A,$ the definition of $\\boldsymbol{\\pi},$ and the stated equivalence between the vector $A\\boldsymbol{v}$ and the functions $F(K)$ of \\eqref{Fun}. The computation of the multipliers $\\pi_K$ by \\mythref{obs2} follows from \\eqref{includee} since each entry of the vector $\\boldsymbol{\\alpha}^TA^{-1}$ corresponds to a distinct $K \\subseteq N,$ and realizes value $\\frac{1}{D_3}(\\sum_{J \\subseteq N} \\alpha_J\\prod_{j \\in J}\\hat{x}_j),$ where $\\hat{x}_j=U_j$ for all $j \\in K$ and $\\hat{x}_j=L_j$ for all $j \\notin K.$ For each such $K,$ this value is the multiplier $\\pi_K$ on the associated function $F(K).$ \\mythref{obs2} gives us, provided the polynomial is nonnegative over the extreme points of $X^{\\prime},$ that $\\sum_{J \\subseteq N}\\alpha_J\\prod_{j \\in J}x_j$ vanishes at a point $\\hat{\\x} \\in X^{\\prime}$ if and only if $\\pi_KF(K)=0$ for all $K \\subseteq N,$ where each $F(K)$ is evaluated at $\\hat{\\x}.$ \\hfill$\\diamond$\n\\end{example}\n\n\\subsection{Convex Hull via Projection}\n\nThe example below illustrates the computation of the set $conv\\left(G^{\\prime}\\right)$ via the projection from the extended variable space $(\\x,\\boldsymbol{w},y)$ of $P_y$ onto the space of the variables $(\\x,y),$ as stated in \\mythref{equalities}.
For simplicity, the sets $X^{\\prime}$ and $G^{\\prime}$ of \\eqref{Xprimedef} and \\eqref{marker} are reduced to the sets $X$ and $G$, respectively, by setting $L_j=\\ell$ and $U_j =u$ for all $j \\in N.$ The example also demonstrates the result of \\mythref{obs2} that allows for the identification of those points within $X$ at which each facet is satisfied exactly. Example~\\ref{ex2} builds upon Example~\\ref{ex1}, and will be referenced later.\n\n\\begin{example}\t\\label{ex2}\nConsider the set $G$ with $N=\\{1,2,3\\}$ in the $n=3$ variables $x_1, x_2,x_3,$ and the variable $y,$ with $y=m(\\x)=x_1x_2x_3.$ The set $P_y$ of \\eqref{handy}, whose projection onto the $(\\x,y)$ variable space gives $\\conv{G}$ as stated in \\mythref{equalities}, is expressed in matrix form below. The matrix partitioning is used to emphasize that the first eight restrictions are inequalities and the last restriction is an equality.\n\\begin{align}\n&\\scriptsize\\left[\\begin{array}{rr|rr|rr|rr|r} \nu^3 & -u^2 & -u^2 & u & -u^2 & u & u & -1 & 0 \\\\ \n-\\ell u^2 & u^2 & \\ell u & -u & \\ell u & -u & -\\ell & 1 & 0 \\\\ \\cline {1-9}\n-\\ell u^2 & \\ell u & u^2 & -u & \\ell u & -\\ell & -u & 1 & 0 \\\\ \n\\ell^2 u & -\\ell u & -\\ell u & u & -\\ell^2 & \\ell & \\ell & -1 & 0 \\\\ \\cline {1-9}\n-\\ell u^2 & \\ell u & \\ell u & -\\ell & u^2 & -u & -u & 1 & 0 \\\\ \n\\ell^2 u & -\\ell u & -\\ell^2 & \\ell & -\\ell u & u & \\ell & -1 & 0 \\\\ \\cline {1-9}\n\\ell^2 u & -\\ell^2 & -\\ell u & \\ell & -\\ell u & \\ell & u & -1 & 0 \\\\ \n-\\ell^3 & \\ell^2 & \\ell^2 & -\\ell & \\ell^2 & -\\ell & -\\ell & 1 & 0 \\\\ \n\\end{array}\\right]\\left(\\begin{array}{cccccccc} 1 \\\\ x_3 \\\\ \\cline {1-1} x_2 \\\\ w_{23} \\\\ \\cline {1-1} x_1 \\\\ w_{13} \\\\ \\cline {1-1} w_{12} \\\\ w_{123} \\\\ \\end{array} \\right) \\geq \\left(\\begin{array}{cccccccc} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0 \\\\ \\cline {1-1} 0 \\\\ 0 \\\\ \\end{array} \\right)\\nonumber \\\\\n&\\scriptsize\\phantom{|}\\left[\\begin{array}{rr|rr|rr|rr|r} \n\\phantom{\\ell u^2|}0 & \\phantom{-\\ell |}0 & \\phantom{\\ell u}0 & \\phantom{-|}0\\phantom{l} & \\phantom{\\ell u}0 & \\phantom{-\\ell}0 & \\phantom{a|}0 & -1 & 1\\\\ \n\\end{array}\\right]\\phantom{|}\\left(\\begin{array}{cccccccc} \\phantom{aa}y\\phantom{aa} \\\\ \\end{array} \\right) \\phantom{|}=\\phantom{|} \\left(\\begin{array}{cccccccc} \\phantom{|}0\\phantom{|} \\\\ \\end{array} \\right) \\label{largeee}\n\\end{align}\n\n\\noindent The eight inequalities are the linearized form of \\eqref{larger} when $L_1,$ $L_2,$ and $L_3$ are set to $\\ell,$ and when $U_1,$ $U_2,$ and $U_3$ are set to $u$, while the equation is $-\\{m(\\x)\\}_L+y=0.$ Since we desire to project \\eqref{largeee} onto the space of the variables $(x_1,x_2,x_3,y),$ the projection cone takes the form\n\\begin{eqnarray} \n\\scriptsize\n\\left[\\begin{array}{rrrrrrrrr} \nu & -u & -u & u & -\\ell & \\ell & \\ell & -\\ell & 0\\\\ \nu & -u & -\\ell & \\ell & -u & u & \\ell & -\\ell & 0\\\\\nu & -\\ell & -u & \\ell & -u & \\ell & u & -\\ell & 0\\\\\n-1 & 1 & 1 & -1 & 1 & -1 & -1 & 1 & -1\\\\\n\\end{array}\\right]\\left(\\begin{array}{cccccccc} \\pi_{\\emptyset} \\\\ \\pi_3 \\\\ \\pi_2 \\\\ \\pi_{23} \\\\ \\pi_1 \\\\ \\pi_{13} \\\\ \\pi_{12} \\\\ \\pi_{123} \\\\ \\beta^{\\prime} \\\\ \\end{array} \\right) = \\left(\\begin{array}{ccc} 0 \\\\ 0 \\\\ 0\\\\ 0\\\\ \\end{array} \\right), \\label{largend}\n\\end{eqnarray}\nwhere $\\boldsymbol{\\pi}$ are the nonnegative multipliers on the eight inequality
restrictions of \\eqref{largeee}, and where $\\beta^{\\prime}$ is the multiplier on the equation. When $\\ell > 0,$ there are fifteen extreme directions to this cone, with six ``trivial directions\" resulting in the six facets $x_j \\geq \\ell$ and $-x_j \\geq -u$ for $j \\in \\{1,2,3\\}.$ Specifically, setting $\\beta^{\\prime}=0$ for each $j,$ the ``trivial direction\" having $\\pi_K=1$ if $ j \\in K$ and $\\pi_K=0$ otherwise gives $x_j- \\ell \\geq 0,$ and the ``trivial direction\" having $\\pi_K=1$ if $j \\in (N-K)$ and $\\pi_K=0$ otherwise gives $u-x_j \\geq 0,$ all inequalities scaled by $\\frac{1}{(u-\\ell)^2}.$ (Clearly, for each $j \\in \\{1,2,3\\},$ the inequality $x_j - \\ell \\geq 0$ is satisfied exactly at all points $(x_1,x_2,x_3,y) \\in G$ having $x_j = \\ell,$ and the inequality $u-x_j \\geq 0$ is satisfied exactly at all points $(x_1,x_2,x_3,y) \\in G$ having $x_j = u.$) Each of the remaining nine directions is depicted as a column of the matrix below.\n\\begin{eqnarray} \n\\scriptsize\n\\left[\\begin{array}{ccccccccc} \n0 & 0 & 0 & 0 & 0 & 0 & 0 & \\ell & \\ell+2u\\\\ \n\\ell+u & \\ell & \\ell+u & \\ell & 0 & 0 & 0 & 0 & u\\\\\n\\ell & \\ell+u & 0 & 0 & \\ell+u & \\ell & 0 & 0 & u\\\\\n\\ell+u & \\ell+u & u & 0 & u & 0 & \\ell & 0 & 0\\\\\n0 & 0 & \\ell & \\ell+u & \\ell & \\ell+u & 0 & 0 & u\\\\\nu & 0 & \\ell+u & \\ell+u & 0 & u & \\ell & 0 & 0\\\\\n0 & u & 0 & u & \\ell+u & \\ell+u & \\ell & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 2\\ell+u & u & 0\\\\\n\\ell-u & \\ell-u & \\ell-u & \\ell-u & \\ell-u & \\ell-u & -\\ell+u & -\\ell+u & -\\ell+u\\\\\n\\end{array}\\right] \\nonumber\n\\end{eqnarray}\nThe set $\\conv{G}$ is then defined in terms of the $x_j - \\ell \\geq 0$ and $u-x_j \\geq 0$ restrictions for $j \\in \\{1,2,3\\},$ together with the nine facets that are listed in the first column of the table below, upon dividing each inequality by $(u-\\ell).$\n\n\\begin{table}\n\\begin{center}\n\\caption{Facet and Points in $G$ where Satisfied Exactly when $\\ell >0$}\n\\label{tab1}\n\\begin{tabular}{|c | c c c |}\n\\hline\n\\vspace{-.1 in}\n& & & \\\\\n\\vspace{-.1 in}\nFacet & \\multicolumn{3}{c} {Points in $G$ where Satisfied Exactly} \\\\ \n& & & \\\\\n\\hline\n$(\\ell^2)x_1+(\\ell u)x_2+(u^2)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,\\ell,\\ell)$ &$(u,x_2,\\ell)$ &$(u,u,x_3)$ \\\\ \n$(\\ell^2)x_1+(u^2)x_2+(\\ell u)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,\\ell,\\ell)$ &$(u,x_2,u)$ &$(u,\\ell,x_3)$ \\\\\n$(\\ell u)x_1+(\\ell^2)x_2+(u^2)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,u,\\ell)$ &$(\\ell,x_2,\\ell)$ & $(u,u,x_3)$ \\\\\n$(u^2)x_1+(\\ell^2)x_2+(\\ell u)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,u,u)$ &$(\\ell,x_2,\\ell)$ & $(\\ell,u,x_3)$ \\\\\n$(\\ell u)x_1+(u^2)x_2+(\\ell^2)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,\\ell,u)$ &$(u,x_2,u)$ & $(\\ell,\\ell,x_3)$ \\\\\n$(u^2)x_1+(\\ell u)x_2+(\\ell^2)x_3-y \\geq \\ell u(\\ell+u)$ &$(x_1,u,u)$ &$(\\ell,x_2,u)$ & $(\\ell,\\ell,x_3)$ \\\\\n$-(\\ell^2)x_1-(\\ell^2)x_2-(\\ell^2)x_3+y \\geq -2\\ell^3$ &$(x_1,\\ell,\\ell)$ &$(\\ell,x_2,\\ell)$ & $(\\ell,\\ell,x_3)$ \\\\\n$-(\\ell u)x_1-(\\ell u)x_2-(\\ell u)x_3+y \\geq -\\ell u(\\ell+u)$ & \\multicolumn{3}{c|} {Any $x_i=\\ell,$ any $x_j=u, \\; i \\neq j.$} \\\\\n$-(u^2)x_1-(u^2)x_2-(u^2)x_3+y \\geq -2u^3$ &$(x_1,u,u)$ &$(u,x_2,u)$ & $(u,u,x_3)$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nBy letting $y=m(\\x)$ within each facet of Table~\\ref{tab1}, \\mythref{obs2} allows us to identify, in terms of the positive multipliers $\\boldsymbol{\\pi}$ of \\eqref{largend}, the points in $G$
where each such inequality is satisfied exactly. Definition \\eqref{Fun} gives us, for this example, that a function $F(K)$ vanishes at a point $\\hat{\\x} \\in X$ if and only if either $\\hat{x}_j=\\ell$ for some $j \\in K$ or $\\hat{x}_j=u$ for some $j \\notin K.$ The second column of Table~\\ref{tab1}, which follows logically from this observation, lists the set of all points $(x_1,x_2,x_3,y) \\in G$ where each facet is satisfied exactly, with the value $y=x_1x_2x_3$ suppressed for simplicity. The notation $``x_j\"$ found within this table indicates that the variable $x_j$ can have $\\ell \\leq x_j \\leq u.$ Observe that, given any realization of $(x_1,x_2,x_3)$ wherein two of the variables have values at either their lower or upper bounds, the first six facets enforce $y \\leq x_1x_2x_3$ while the last three facets enforce $y \\geq x_1x_2x_3,$ ensuring that $y=x_1x_2x_3.$\n\nFor the case when $\\ell=0,$ each of the first eight facets of Table~\\ref{tab1} appears twice by repetition, and the inequalities $x_j \\geq 0$ for $j \\in \\{1,2,3\\}$ are not facets. Table~\\ref{tab2} states the five resulting facets and those points $(x_1,x_2,x_3,y) \\in G$ (with $y=x_1x_2x_3$ suppressed for simplicity) where each facet is satisfied exactly. Here, the notation $``x_j\"$ indicates that the variable $x_j$ can have $0 \\leq x_j \\leq u.$ \\hfill$\\diamond$\n\n\\begin{table}\n\\begin{center}\n\\caption{Facet and Points in $G$ where Satisfied Exactly when $\\ell =0$}\n\\label{tab2}\n\\begin{tabular}{|c | c c c |}\n\\hline\n\\vspace{-.1 in}\n& & & \\\\\n\\vspace{-.1 in}\nFacet & \\multicolumn{3}{c} {Points in $G$ where Satisfied Exactly} \\\\ \n& & & \\\\\n\\hline\n$(u^2)x_3-y \\geq 0$ &$(x_1,x_2,0)$ & $(u,u,x_3)$ & \\\\ \n$(u^2)x_2-y \\geq 0$ &$(x_1,0,x_3)$ & $(u,x_2,u)$ & \\\\\n$(u^2)x_1-y \\geq 0$ &$(0,x_2,x_3)$ & $(x_1,u,u)$ & \\\\\n$y \\geq 0$ & \\multicolumn{3}{c|} {Any $x_i=0$} \\\\\n$-(u^2)x_1-(u^2)x_2-(u^2)x_3+y \\geq -2u^3$ &$(x_1,u,u)$ &$(u,x_2,u)$ & $(u,u,x_3)$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\end{example}\n\n\n\\subsection{Convex Hull for Supermodular Function}\n\n\\begin{example} \\label{ex3}\nConsider $G$ in the $n=3$ variables $x_1, x_2,x_3,$ and the variable $y,$ with $m(\\x)=x_1x_2x_3$ as in Example~\\ref{ex2}, and with $\\ell \\geq 0$. Inequalities \\eqref{rest0} hold true because $(m(\\x^k)-m(\\x^{k-1}))$ takes values $\\ell^2(u-\\ell),$ $(\\ell u)(u-\\ell),$ and $u^2(u-\\ell)$ when $k=1,$ $2,$ and $3,$ respectively. Then \\mythref{result1} is applicable, and core facet \\eqref{res1} is\n\\begin{equation}\n-\\ell u(\\ell +u)+(\\ell^2)x_1+(\\ell u)x_2+(u^2)x_3-y \\geq 0, \\label{testing1}\n\\end{equation}\nand core facets \\eqref{res2} are\n\\begin{eqnarray}\n-\\ell^2u-\\ell^2\\left(x_1+x_2+x_3-[u+2\\ell]\\right)+y &\\geq& 0,\\label{testing2}\\\\\n-\\ell u^2-\\ell u\\left(x_1+x_2+x_3-[2u+\\ell]\\right)+y &\\geq& 0,\\label{testing3}\\\\\n-u^3-u^2\\left(x_1+x_2+x_3-[3u]\\right)+y &\\geq& 0. \\label{testing4}\n\\end{eqnarray}\nFor $\\ell>0,$ $m(\\x)$ is strictly supermodular and facet \\eqref{testing1}, together with the additional five facets obtained by permuting the coefficients $\\x[\\beta],$ are the first six facets of Table~\\ref{tab1}. Facets \\eqref{testing2}--\\eqref{testing4} are the last three facets of Table~\\ref{tab1}.
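\n\nAs a quick numerical sanity check (a sketch only, not part of the formal development), the validity of \\eqref{testing1}--\\eqref{testing4} can be confirmed at the eight vertices of $X,$ which suffices because the extreme points of $\\conv{G}$ arise from the vertices of $X.$ The Python sketch below uses the hypothetical bounds $\\ell=1$ and $u=2$ purely for illustration.\n\\begin{verbatim}\nfrom itertools import product\n\nl, u = 1.0, 2.0  # hypothetical bounds with l > 0, for illustration only\n\ndef core_facets(x1, x2, x3, y):\n    # left-hand sides of (testing1)-(testing4); validity means each is >= 0\n    s = x1 + x2 + x3\n    return [-l*u*(l + u) + l*l*x1 + l*u*x2 + u*u*x3 - y,\n            -l*l*u - l*l*(s - (u + 2*l)) + y,\n            -l*u*u - l*u*(s - (2*u + l)) + y,\n            -u**3 - u*u*(s - 3*u) + y]\n\nfor x1, x2, x3 in product([l, u], repeat=3):\n    y = x1*x2*x3  # the vertex of G above this vertex of X\n    assert all(f >= -1e-9 for f in core_facets(x1, x2, x3, y))\n\\end{verbatim}\n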
In addition, the six inequalities $\\ell \\leq x_j \\leq u$ for $j\\in \\{1,2,3\\}$ comprise the remaining facets of $\\conv{G}.$ For $\\ell=0,$ $m(\\x)$ is supermodular (but not strictly supermodular because $(m(\\x^k)-m(\\x^{k-1}))$ takes values $0,$ $0,$ and $u^3$ when $k=1,$ $2,$ and $3,$ respectively), and three of the six facets obtained by permuting $\\x[\\beta]$ in \\eqref{testing1} are repetitive. The three resulting facets are the first three inequalities of Table~\\ref{tab2}. Also, and consistent with Remark 1, facets \\eqref{testing2} and \\eqref{testing3} are the same. Facets \\eqref{testing2} and \\eqref{testing4} are the last two inequalities of Table~\\ref{tab2}. In addition for $\\ell=0,$ $x_j \\geq 0$ for $j \\in \\{1,2,3\\}$ are not facets.\n\nSuppose that $G$ in the $n=3$ variables $x_1, x_2,x_3,$ and the variable $y,$ is changed to have $y=m(\\x)=(2u-\\ell)(x_1x_2+x_1x_3+x_2x_3)-x_1x_2x_3,$ with arbitrary $\\ell$ and $u.$ Inequalities \\eqref{rest0} hold true because the difference between the two sides takes values $2(u-\\ell)^3$ and $(u-\\ell)^3$ when $k=1$ and $k=2,$ respectively, so that \\mythref{result1} is applicable with $m(\\x)$ strictly supermodular. Core facet \\eqref{res1} is\n\\begin{equation}\n(\\ell u)(4\\ell -5u)+(4\\ell u-3\\ell^2)x_1+\\left(2u^2-\\ell^2\\right)x_2+\\left(3u^2-2\\ell u\\right)x_3-y \\geq 0, \\nonumber\n\\end{equation}\nand core facets \\eqref{res2} are\n\\begin{eqnarray*}\n-(4\\ell u^2-\\ell^3 -\\ell^2 u)-(4\\ell u - 3\\ell^2)\\left(x_1+x_2+x_3-[u+2\\ell]\\right)+y &\\geq& 0,\\\\\n-(2\\ell u^2-2\\ell^2 u+2u^3)-(2u^2-\\ell^2)\\left(x_1+x_2+x_3-[2u+\\ell]\\right)+y &\\geq& 0,\\\\\n-(5u^3-3\\ell u^2)-(3u^2-2\\ell u)\\left(x_1+x_2+x_3-[3u]\\right)+y &\\geq& 0.\\\\\n\\end{eqnarray*}\nThere are exactly nine facets \\eqref{sum24} with $\\beta^{\\prime}= \\pm 1$ for $\\conv{G},$ including the four listed above and the five additional ones obtained by permuting the coefficients $\\x[\\beta]$ in the first inequality. The six inequalities $\\ell \\leq x_j \\leq u$ for $j\\in \\{1,2,3\\}$ comprise the remaining facets of $\\conv{G},$ as $\\ell < u.$ For $\\ell > 0,$ $m(\\x)$ was noted to be strictly supermodular. Then \\mythref{exactstrictsupermod} gives us that inequality \\eqref{testing1} is satisfied exactly at only those points $(\\x,y) \\in G$ where $\\x$ is of the form $(x_1,\\ell,\\ell),$ $(u,x_2,\\ell),$ or $(u,u,x_3),$ matching the first inequality of Table~\\ref{tab1}. The next five inequalities of Table~\\ref{tab1}, which follow from permutations of the coefficients of \\eqref{testing1}, have the set of points at which each is satisfied exactly obtained via the same permutations. \\mythref{exactstrictsupermod} gives us that inequalities \\eqref{testing2}, \\eqref{testing3}, and \\eqref{testing4} are satisfied exactly at only those points $(\\x,y) \\in G$ where $\\x$ has, respectively, at least two entries of value $\\ell,$ at least one entry of value $u$ and at least one entry of value $\\ell,$ and at least two entries of value $u.$ These results of \\mythref{exactstrictsupermod} match the last three rows of Table~\\ref{tab1}. Also as noted in Example~\\ref{ex3}, the six inequalities $\\ell \\leq x_j \\leq u$ for $j\\in \\{1,2,3\\}$ are all facets of $\\conv{G}.$ For each of these last six facets, the set of points $(\\x,y) \\in G$ which satisfy it exactly is obvious. For $\\ell =0,$ $m(\\x)$ was noted to be supermodular, but not strictly supermodular.
In this case, \\mythref{newsuper100} gives us that inequality \\eqref{testing1} is satisfied exactly at only those points $(\\x,y) \\in G$ having either $x_3=0$ or $x_1=x_2=u,$ matching the first inequality of Table~\\ref{tab2}. The next two inequalities of Table~\\ref{tab2}, which follow from permutations of the coefficients of \\eqref{testing1}, have the set of points at which each is satisfied exactly obtained via the same permutations. Also as shown in Example~\\ref{ex3}, inequalities \\eqref{res2} are the last two inequalities of Table~\\ref{tab2}. Then \\mythref{newsuper100} gives us that the $y \\geq 0$ inequality of \\eqref{testing2} and the $-(u^2)x_1-(u^2)x_2-(u^2)x_3+y \\geq -2u^3$ inequality of \\eqref{testing4} are satisfied exactly at only those points $(\\x,y) \\in G$ where $\\x$ has, respectively, at least one entry of value $0,$ and at least two entries of value $u,$ matching Table~\\ref{tab2}.\\hfill$\\diamond$\n\\end{example}\n\n\n\\subsection{Exactness for Monomial}\n\n\\begin{example} \\label{ex6}\nReconsider $G$ in the $n=3$ variables $x_1,x_2,x_3,$ with $m(\\x)=5x_1x_2x_3,$ and with $-\\ell=u=2$, as found in Example~\\ref{ex4} and Table~\\ref{tab3}. Recall that each facet within Table~\\ref{tab3} has been suitably scaled to handle the coefficient $c_n=5$ found in $m(\\x)$ and the variable bounds $-\\ell=u=2.$ Therefore, consistent with the discussion at the beginning of this subsection, every point $(\\tilde{\\x},\\tilde{y})$ identified in \\mythref{finalet} as satisfying an inequality of the form \\eqref{trivial}, \\eqref{pair1}, \\eqref{pair2}, or \\eqref{dominate2} exactly for the case in which $c_n=1$ and $-\\ell=u=1$ must be scaled to $(u\\tilde{\\x},c_nu^n\\tilde{y})=(2\\tilde{\\x},40\\tilde{y}).$ The second column of Table~\\ref{tab3} gives these scaled points for each such inequality, with the value $y=5x_1x_2x_3$ suppressed for simplicity. Here, the two inequalities in the first block of constraints within Table~\\ref{tab3} are scaled \\eqref{trivial} for $\\beta^{\\prime}=-1$ and $\\beta^{\\prime}=1,$ respectively, the inequality in the second block is scaled \\eqref{pair1}, the inequality in the third block is scaled \\eqref{pair2}, and the inequalities in the fourth and fifth blocks are the three inequalities resulting from coefficient permutations of the scaled \\eqref{dominate2}, with $t=0$ and $t=1,$ respectively. \\hfill$\\diamond$\n\\end{example}\n\\section{Introduction}\n\nA polynomial is multilinear if every monomial is square-free in the sense that it is a product of a subset of variables, each raised to the power one. Multilinear polynomials have degree at most $n$ and become linear functions when $n-1$ variables are fixed (hence the name). A multilinear polynomial can be expressed as $\\sum_{J\\subseteq N}c_{J}\\prod_{j\\in J}x_{j}$ for some $c\\in\\real^{2^{n}}$. We are interested in symmetric multilinear polynomials (SMPs) in this paper, by which we mean the polynomial\n\\begin{subequations}\n\\begin{equation}\nm(\\x) \\eq \\sum_{i=2}^n c_i \\sum_{\\substack{J \\subseteq N\\\\|J| =i}}\\prod_{j \\in J}x_j, \\label{functionm}\n\\end{equation}\nwhere we have dropped the constant and linear terms since they are inconsequential to us from the point of view of convexification. The symmetry of $m$ refers to the fact that for every $x\\in\\real^{n}$ and $\\bar{x}$ equal to a permutation of $x$, we have $m(x) = m(\\bar{x})$. Our goal is to study the convex hull of the graph of an SMP over a symmetric box in $\\real^{n}$.
Namely, we study $\\conv{G}$ which is the convex hull of the set\n\\begin{equation}\nG = \\left\\{(\\x,y)\\in \\real^n \\times \\real: \\x \\in X, \\; y= m(\\x)\\right\\}, \\label{Sdef}\n\\end{equation}\nwhere $X$ is a box imposing the same lower and upper bounds (finite $\\ell < u$) on all variables,\n\\begin{equation}\t\nX = \\left\\{\\x\\in \\real^n: \\ell \\leq x_j \\leq u, \\ j \\in N\\right\\}.\t\\label{Xdef}\n\\end{equation}\n\\end{subequations}\nThroughout this paper, we use $N:= \\{1, \\ldots, n\\}$. \\akshay{Although coordinatewise scaling and translation do not break the symmetry of $m(x)$ and can reduce the box $X$ to the unit hypercube $[0,1]^{n}$, we work with arbitrary $\\ell$ and $u$ to avoid this affine transformation, which can be cumbersome to perform.\n\nThe graph $G_{p} = \\{(\\x,y)\\in B\\times\\real \\colon y = p(x)\\}$ of a general multilinear polynomial $p(x) = \\sum_{J\\subseteq N}c_{J}\\prod_{j\\in J}x_{j}$ over an arbitrary box $B\\subset\\real^{n}$ appears not only as a substructure in some important applications but also when optimizing a polynomial over binary variables \\citep{del2016polyhedral}, which is equivalent to pseudo-Boolean optimization \\citep{boros2002pseudo}, and as an intermediate set when performing factorable reformulations of general polynomial optimization problems \\citep{bao2015global,del2019impact,buchheim2008efficient}. Hence, convexification of $G_{p}$ has been the subject of many studies in the literature. It is known that this convex hull is a polytope and that exponential-sized extended formulations are available \\cite{rikun1997convex,sherali1997convex,ballerstein2014extended}, but an explicit description in the $(\\x,y)$-space is not known in general. Earlier studies focused on convexifying multilinear monomials \\cite{cafieri2010convex,meyer2004trilinear,ryoo2001analysis,benson2004concave}, motivated by the classical linearization of a monomial $x_{1}x_{2}\\dots x_{n}$ over $[0,1]^{n}$ \\citep{glover74lin}, which defines the convex hull of the graph of the monomial \\cite{crama1993concave} and leads to the standard linearization for $G_{p}$. Separation over $G_{p}$ is NP-hard, and so different classes of valid inequalities and cutting plane procedures have been developed for use in global optimization algorithms \\citep{bao2015global,del2019impact,fomeni2015cutting,crama2017class}. There have also been many recent studies on describing $G_{p}$ or generalizations of it in the monomial space, which is obtained by adding a new variable for each monomial, under different assumptions on the structure of the polynomial \\cite{del2018multilinear,del2018running,del2016polyhedral,gupte2020bilinear,chen2020multilinear,fischer2018matroid,fischer2020matroid,luedtke2012strength,buchheim2019berge}.\n\nSymmetry has not been exploited in the rich body of literature on convexifying multilinear polynomials. The main objective of this paper is to initiate a systematic polyhedral analysis of $\\conv{G}$ by exploiting symmetry in this set. Our focus is on the minimal inequality description of this full-dimensional polytope in the original $(x,y)$-space, as opposed to the many studies in the literature about convexifying general multilinear polynomials in the monomial space. Note that there will be exponentially many monomials in $m(x)$ when $c_{i}\\neq 0$ where $i=n\/k$ for some constant $k\\ge 1$, and so a reformulation to the monomial space will not always be tractable.
The symmetric structure we assume is interesting not only because it enables a thorough analysis of the convex hull and hence adds to the convexification literature, but also because it arises in many applications of combinatorial optimization \\cite{anthony2016quadratization,boros2019compact,dey2020spca,kim2019convexification} and with regard to the chromatic number of graphs and other areas of combinatorics \\citep[cf.][]{eagles2020h,stanley1995symmetric}.\n}\n\n\\subsection{Our Contributions}\n\nThere are exponentially many facets, but the symmetry of the function $m$ and of the set $X$ means that we only need to focus on a certain subset of inequalities, which we call core inequalities; all other inequalities are generated as permutations of the coefficients in core inequalities. In theory, all these inequalities can be obtained by projecting an extended formulation having $O(n^{2})$ many variables and $O(n)$ many constraints, which is much smaller in size than the exponential-sized extensions for general multilinear polynomials. However, projecting this extension is a tedious task due to the combinatorial explosion that generally occurs with the projection operation. Instead, we give several necessary conditions and some sufficient conditions for a core valid inequality to be facet-defining, and these are more tractable to verify than the conditions that come from the use of polarity, since the latter require enumeration of the extreme points of a polyhedron. They can be applied to certify whether a given description of $\\conv{G}$ is minimal or not. In that regard, we use our conditions to obtain an explicit listing of all the facet-defining inequalities for two families of SMPs. The first family is that of supermodular SMPs, and the second family is that of monomials whose lower and upper bounds are reflections of each other ($-\\ell = u > 0$).\n\nThe Reformulation-Linearization Technique (RLT) is known to convexify the graph of a general multilinear polynomial over an arbitrary box. We show that the RLT also implies that such a polynomial is nonnegative on a box if and only if it is nonnegative at every vertex of the box. This consequence enables us to characterize the set of points on $G$ that lie on a facet of the convex hull of $G$. These sets, by derivation, turn out to be different unions of $d$-dimensional ($0 \\le d\\le n-1$) faces of $\\conv{G}.$\n\n\nThe questions of optimization and separation are also answered for $\\conv{G}$ for any SMP without using the quadratic-sized extension. A linear function can be optimized over $\\conv{G}$ in $O(n\\log{n})$ time (assuming the value of the function $m$ at a vertex of $X$ can be computed in $O(1)$ time) without using the extended formulation. Thus, a point can be separated from $\\conv{G}$ in polynomial time via the ellipsoid method. There is also a direct separation algorithm that runs in $O(nt + n\\log{n})$ time, where $t$ is the number of core inequalities, and hence has polynomial time complexity when $t$ is bounded by a polynomial in the input size. \n\n\n\\subsubsection{Related Results in Literature}\n\n\\akshay{Although we recognize that our convex hull descriptions for the two special families have been established before in the literature, our necessary conditions for core facets of $\\conv{G}$ yield alternate proofs for them. In this context, the following results are known.
A set function is supermodular if and only if a certain extension of it from the vertices of a box to the entire box is a concave function \\cite[Proposition 4.1]{lovasz1983submod}; see \\cite[Theorem 4]{iwata2008submodular} for a simple proof. The projection of this extension generates the concave envelope of a supermodular function, and this envelope is described in the $x$-space by so-called polymatroid inequalities; see \\cite[Theorem 1]{atamturk2008polymatroids}, \\cite[Lemma 4]{vondrak2010lecture}, \\cite[Theorem 3.3]{tawarmalani2013explicit} for some proofs of this well-known fact. These polymatroid inequalities can be derived from polarity and the results on optimizing over the polymatroid polyhedron \\cite{edmonds1970submodular}. There is one inequality for each $n$-permutation, allowing for repetitions, and this immediately relates to there being a single core facet of the envelope and all other facets being permutations of it.\nThe convex envelope of a supermodular function is not known in general and is NP-hard to separate over. For the symmetric structure that is considered in this paper, this envelope was first described by \\cite[Theorem 4.6]{tawarmalani2013explicit} using their approach of strategically computing an exponential number of subdivisions of $X$ and by using the extreme points of these subdivisions to calculate the facets.\nA similarity between this proof and ours is that they both rely on Kuhn's triangulation \\cite{kuhn1960lemma} of a box. The convex hull of a monomial with $-\\ell = u$ was first established by the authors in \\citep[Theorem 4.1]{monomerr}. However, it was done so without using the symmetry in the monomial to recognize the core facets. \n}\n\n\nSince the original submission of this paper, convexification of permutation-invariant sets, such as the graph $G$, has been studied by \\cite{kim2019convexification}, who give a general framework for obtaining an extended formulation with $O(n^{2})$ many variables and inequalities for the convex hulls of such sets. This framework is based on first convexifying a strategically-defined subset of the region of interest, and then obtaining the convex hull for all permutations of each point within the convexified set. In the realm of convex hulls for SMPs, \\cite{kim2019convexification} and this paper both utilize information relative to a specific simplex that generates Kuhn's triangulation of a box, and this similarity is not entirely surprising because of symmetry in the problem.\n\n\n\\subsection{Organisation of the Paper}\nSection~2 presents an extended formulation for $\\conv{G}$, gives a polar description of this convex hull, introduces the concept of core inequalities, from which all valid linear inequalities can be derived up to permutations of the coefficients, and answers the questions of optimization and separation. Section~3 provides various conditions for a core valid inequality to be a core facet. Section~4 gives the RLT theory for general multilinear polynomials and derives consequences of it on nonnegativity of the polynomial over a box. This RLT machinery enables us to characterize the set of points in the graph at which a core facet is satisfied exactly. Section~5 considers the case of $m(\\x)$ being a supermodular function and Section~6 considers the case of $m(\\x)$ being a monomial with the variable domain being a box that allows reflections across the origin. Finally, Section~7 provides a summary of the paper and highlights some outstanding open questions for future research.
The Appendix gives alternate arguments for deriving basic properties of $\\conv{G}$ using the RLT, and also contains illustrative examples for our main results.\n\n\n\\section{Preliminaries}\n\n\\akshay{The convex hull of $G$ is a full-dimensional set because $G$ is the surface of a nonlinear function taken over an $n$-dimensional box, and this set is a polytope whose vertices are in bijection with the vertices of the box $X$ since this is known for general multilinear functions \\citep{rikun1997convex,sherali1997convex}.} Throughout this paper, we will study inequalities of the form\n\\begin{equation}\n\\beta_0+\\sum_{j=1}^n\\beta_jx_j+\\beta^{\\prime}y \\geq 0. \\label{sum24}\n\\end{equation}\n\\akshay{Trivial facets of $\\conv{G}$ (also referred to as vertical facets in the literature) are the facets generated by valid $\\x[\\beta]$ with $\\beta^{\\prime}=0$. For $n\\ge 3$, trivial facets are precisely the bounds on the variables, and for $n=2$ there are no trivial facets \\citep[Theorem 2.4 and Remark 2.1]{bao2009multiterm}. Without loss of generality, and up to scaling, we can assume that every nontrivial valid inequality has $\\beta^{\\prime}=\\pm{1}$. Since $\\conv{G}$ is the intersection of the epigraph of the convex envelope of $m$ and the hypograph of the concave envelope of $m$, a facet-defining inequality (facet) with $\\beta^{\\prime}=1$ (resp. $\\beta^{\\prime}=-1$) represents a nontrivial facet of the epigraph (resp. hypograph). Since $\\conv{G}$ is full-dimensional, for every nontrivial facet of $\\conv{G}$ there exists a unique nontrivial valid inequality. We will characterize nontrivial facets using the concept of core inequalities.\n\nCertain notation will be useful throughout our study. Our polyhedral analysis will rely on the simplex\n\\begin{equation}\n\\mathcal{S} \\equiv \\left\\{\\x\\in \\real^n: u \\geq x_1\\geq\\ldots\\geq x_n\\geq \\ell \\right\\}, \\label{simpleex}\n\\end{equation}\nwhose $n+1$ extreme points are\n\\begin{equation}\t\\label{def:xk}\n\\x^{k} \\equiv \\left(\\underbrace{u, \\ldots, u}_{k},\\, \\underbrace{\\ell,\\ldots,\\ell}_{n-k} \\right), \\quad k= 0,\\dots,n.\n\\end{equation}\n\\akshay{The simplex $\\mathcal{S}$ is the one that has been used in combinatorial geometry to yield Kuhn's triangulation of a box \\cite{kuhn1960lemma}. It is also known to be useful for convexifying general submodular\/supermodular functions \\cite{lovasz1983submod,tawarmalani2013explicit} and we will see this also in \\textsection\\ref{sec:supermod}. The value of the multilinear function at each $\\x^{k}$ can be computed by substituting it into equation~\\eqref{functionm}. In the special case of $\\ell=0$ and $u=1$, this becomes the combinatorial formula $m(\\x^{k}) = c_{k} + \\sum_{j=2}^{k-1}\\binom{k}{j}\\,c_{j}$.} For notational convenience, we let ${\\cal L}_k(\\beta_0,\\x[\\beta])= \\beta_0+\\sum_{j=1}^n\\beta_jx^k_j$ be the value obtained by inserting extreme point $\\x^k$ of $\\mathcal{S}$ into the expression $\\beta_0+\\sum_{j=1}^n\\beta_jx_j$ of \\eqref{sum24}, where $x^k_j$ denotes entry $j$ of $\\x^k,$ so that \n\\begin{equation}\n{\\cal L}_k(\\beta_0,\\x[\\beta])=\\beta_0+u\\left(\\sum_{j=1}^k\\beta_j\\right)+\\ell \\left(\\sum_{j=k+1}^n\\beta_j\\right), \\qquad \\forall \\; k \\in \\{0, \\ldots, n\\}, \\label{anoto25}\n\\end{equation} \nwith $\\sum_{j=1}^0\\beta_j=0$ and $\\sum_{j=n+1}^n\\beta_j=0$ in ${\\cal L}_0(\\beta_0,\\x[\\beta])$ and ${\\cal L}_n(\\beta_0,\\x[\\beta]),$ respectively.
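\n\nAs an aside, these quantities are straightforward to compute programmatically. The sketch below (in Python, with hypothetical coefficients chosen purely for illustration) evaluates $m$ at each extreme point $\\x^{k}$ of \\eqref{def:xk} and, for the special case $\\ell=0$ and $u=1,$ verifies the combinatorial formula stated above.\n\\begin{verbatim}\nfrom itertools import combinations\nfrom math import comb, prod\n\ndef m(x, c):\n    # m(x) = sum over i >= 2 of c[i] * sum over |J| = i of prod of x_j,\n    # exactly as in equation (functionm)\n    n = len(x)\n    return sum(c[i] * sum(prod(x[j] for j in J)\n                          for J in combinations(range(n), i))\n               for i in range(2, n + 1))\n\nn, ell, u = 4, 0.0, 1.0        # hypothetical instance with ell = 0, u = 1\nc = {2: 1.0, 3: -2.0, 4: 0.5}  # hypothetical coefficients c_2, ..., c_n\n\nfor k in range(n + 1):\n    xk = [u]*k + [ell]*(n - k)  # extreme point x^k of the simplex S\n    closed = (c[k] if k >= 2 else 0.0) + sum(comb(k, j)*c[j] for j in range(2, k))\n    assert abs(m(xk, c) - closed) < 1e-12\n\\end{verbatim}\n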
\n\nWe begin by noting two implicit descriptions of $\\conv{G}$: one is an extended formulation that projects onto this convex hull, and the other is a polarity result that gives a characterization of all the facets. Then we introduce core inequalities as those inequalities having a nondecreasing order on the coefficients, permutations of which generate all the valid inequalities. Lastly, we give algorithms for optimizing and separating over $\\conv{G}$.\n}\n\n\n\\subsection{Implicit Descriptions of the Convex Hull}\n\n\\akshay{A straightforward application of disjunctive programming, along with the well-known result that the envelopes of a multilinear function are generated by the extreme points of the box, gives us an $O(n^{2})$-sized extended formulation for $\\conv{G}$.\n\n\\begin{proposition}\t\\thlabel{extform}\nThe following polyhedron projects onto $\\conv{G}$,\n\\begin{align*}\n\\Big\\{(x,\\{w^{k}\\}_{k=0}^{n}, y , v, \\lambda) \\sep & x_{j} = \\sum_{k=0}^{n}w^{k}_{j}, \\; \\forall j\\in N, \\ y = \\sum_{k=0}^{n}v_{k}, \\ \\sum_{k=0}^{n}\\lambda_{k}=1, \\\\ \n& \\sum_{j=1}^{n}w^{k}_{j} = (ku + (n-k)\\ell)\\lambda_{k}, \\ v_{k} = \\mk\\lambda_{k}, \\ k=0,\\dots,n \\\\\n& \\ell \\lambda_{k} \\le w^{k}_{j} \\le u\\lambda_{k} \\; \\forall j,k, \\ v \\in \\real^{n+1}, \\ \\lambda \\in \\real^{n+1}_{+} \\Big\\}\n\\end{align*}\n\\end{proposition}\n\\begin{proof}\nIt is well-known \\cite{sherali1997convex,rikun1997convex} that envelopes of a multilinear function over a box have the vertex extendability property, meaning that for any multilinear function $p(x)$ and box $B\\subset\\real^{n}$, the convex hull of $\\{(x,y)\\in B\\times\\real\\colon y = p(x) \\}$ is equal to the convex hull of $\\{(x,y)\\in \\vertex{B}\\times\\real\\colon y = p(x) \\}$. Therefore, we have $\\conv{G}$ being equal to $\\conv{\\{(x,y)\\in\\{\\ell,u\\}^{n}\\times\\real\\colon y = m(x)\\}}$. The vertex set $\\{\\ell,u\\}^{n}$ can be partitioned into $n+1$ sets, with each set for $k\\in\\{0,\\dots,n\\}$ corresponding to points having $k$ entries of $u$, thereby giving us\n\\begin{equation}\t\\label{convunion}\n\\vertex{(\\conv{G})} \\eq \\bigcup_{k=0}^{n}\\bigcup_{\\sigma\\in\\Sigma_{n}} \\left\\{(\\sigma\\cdot\\x^{k},m(\\sigma\\cdot\\x^{k})) \\right\\}\n\\eq \\bigcup_{k=0}^{n} \\left\\{ Q_{k} \\times \\{\\mk\\} \\right\\},\n\\end{equation}\nwhere $\\Sigma_{n}$ is the symmetric group of $n$ elements, $\\sigma\\cdot v = (v_{\\sigma(1)}, v_{\\sigma(2)}, \\dots, v_{\\sigma(n)})$ for a vector $v\\in\\real^{n}$, $Q_{k} := \\cup_{\\sigma\\in\\Sigma_{n}}\\{\\sigma\\cdot\\x^{k}\\}$, and the second equality is due to symmetry of $m(x)$. Since $\\conv{G}$ is the convex hull of its vertices, we get $\\conv{G} = \\conv{\\left(\\bigcup_{k=0}^{n} \\left\\{ \\conv{Q_{k}} \\times \\{\\mk\\} \\right\\}\\right)}$. Since $Q_{k}$ is the set of all points that have $k$ entries of $u$ and $n-k$ entries of $\\ell$, we have $\\conv{Q_{k}} = \\{x\\in[\\ell,u]^{n}\\colon \\sum_{j=1}^{n}x_{j} = ku + (n-k)\\ell \\}$. Set $P_{k} := \\conv{Q_{k}}\\times\\{\\mk\\} = \\{(x,y)\\in[\\ell,u]^{n}\\times\\real\\colon \\sum_{j=1}^{n}x_{j} = ku + (n-k)\\ell, \\, y = \\mk\\}$, which is a polyhedron. Since $\\conv{G} = \\conv(\\cup_{k=0}^{n}P_{k})$, applying disjunctive programming \\cite{balas1979disjunctive} yields the desired extended formulation.\n\\end{proof}\n\nAnother way of implicitly describing the convex hull is to characterise all its facets using the extreme points of the polar of the convex hull.
This is known from \\cite[Theorem 2]{sherali1997convex} and \\cite[Theorem 2.4]{bao2009multiterm} for general multilinear functions over arbitrary boxes, and hence we state the result below for an SMP without proof.\n\n\\begin{proposition}\nAn inequality~\\eqref{sum24} with $\\beta' = 1$ (resp. $\\beta' = -1$) is a facet of $\\conv{G}$ if and only if $(\\beta_{0},\\beta)$ is an extreme point of the polyhedron $E := \\{(\\beta_{0},\\beta)\\colon \\beta_{0}+\\beta^{\\top}\\bar{\\x} \\ge - m(\\bar{\\x}), \\ \\bar{x}\\in\\{\\ell,u\\}^{n} \\}$ (resp. $H := \\{(\\beta_{0},\\beta)\\colon \\beta_{0}+\\beta^{\\top}\\bar{\\x} \\ge m(\\bar{\\x}), \\ \\bar{x}\\in\\{\\ell,u\\}^{n} \\}$).\n\\end{proposition}\n\nThe challenge with using this polarity result is that it requires enumeration of the extreme points of the sets $E$ and $H$, which are exponentially many in general. The next subsection observes that it suffices to focus attention on only a subset of inequalities since all other inequalities are obtained via permutations.\n}\n\n\\subsection{Core Inequalities}\n\nWe define \\emph{core inequalities} as being those inequalities~\\eqref{sum24} that have $\\beta^{\\prime}=\\pm{1}$ and $\\beta_1 \\leq \\ldots \\leq \\beta_n.$ The restriction that $\\beta^{\\prime}=\\pm{1}$ for nonzero $\\beta^{\\prime}$ is nonrestrictive, as it follows from scaling. The restriction that $\\beta_1 \\leq \\ldots \\leq \\beta_n$ follows from the problem symmetry, as an inequality will be valid (a facet) for $\\conv{G}$ if and only if every symmetric copy obtained by permuting the entries in $\\x[\\beta]^{\\top}$ is also valid (a facet). A \\emph{valid core inequality} is a core inequality that is valid for $\\conv{G}.$ A valid core inequality that is also a facet for $\\conv{G}$ is a \\emph{core facet}. \n\nSymmetry implies that the validity of a core inequality \\eqref{sum24} can be checked in terms of only the $(n+1)$ extreme points of the simplex $\\mathcal{S}.$ \n\n\\begin{lemma}\t\\thlabel{foundation}\nA core inequality \\eqref{sum24} is a valid core inequality if and only if \n\\begin{equation}\n{\\cal L}_k(\\beta_0,\\x[\\beta]) + \\beta^{\\prime} m(\\x^k) \\geq 0, \\qquad \\forall \\; k \\in \\{0, \\ldots, n\\}. \\label{anoto}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe only if direction is trivial, and so we consider the if direction. It is sufficient to show that the $(n+1)$ inequalities of \\eqref{anoto} imply that \\eqref{sum24} is nonnegative for all $2^n$ extreme points of $\\conv{G}.$ Toward this end, consider any $k \\in\\{0, \\ldots, n\\},$ and note that each of the $n \\choose k$ extreme points $(\\x,y)$ of $\\conv{G}$ with $\\x$ having $k$ entries of value $u$ and $(n-k)$ entries of value $\\ell$ also has $y = m(\\x^k).$ In addition, each such extreme point yields a value for $\\left(\\beta_0+\\sum_{j=1}^n\\beta_jx_j\\right)$ that is at least as large as ${\\cal L}_k(\\beta_0,\\x[\\beta]),$ so that \\eqref{sum24} is nonnegative for all $n \\choose k$ such extreme points.
As the result holds true for every such $k,$ it holds true for all the extreme points of $\\conv{G}.$ This completes the proof.\n\\end{proof}\n\n\\subsection{Optimization and Separation}\n\n\\akshay{\nThe question of optimizing a linear function over $\\conv{G}$ can be solved via a sorting algorithm without using the quadratic-sized extended formulation of \\mythref{extform}.\n\n\\begin{proposition}\t\\thlabel{optcompl}\nFor any $(\\alpha,\\alpha')\\in\\real^{n}\\times\\real$,\n\\[\n\\max\\left\\{\\alpha^{\\top}x + \\alpha^{\\prime}y \\sep (x,y)\\in\\conv{G} \\right\\} \\eq \\max_{k=0,\\dots,n}\\left\\{u\\sum_{j=1}^{k}\\alpha_{\\sigma(j)} + \\ell\\sum_{j=k+1}^{n}\\alpha_{\\sigma(j)} + \\alpha^{\\prime}\\mk \\right\\},\n\\]\nwhere the permutation $\\sigma$ is such that $\\alpha_{\\sigma(1)}\\ge \\alpha_{\\sigma(2)} \\ge \\cdots \\ge \\alpha_{\\sigma(n)}$.\n\\end{proposition}\n\\begin{proof}\nThe linear function has an optimum at a vertex of $\\conv{G}$, and so the left-hand side is equivalent to maximising $\\alpha^{\\top}x + \\alpha^{\\prime}y$ over $\\vertex{(\\conv{G})}$. Our claim follows after using $\\vertex{(\\conv{G})} = \\cup_{k=0}^{n}\\{Q_{k}\\times\\{\\mk\\} \\}$ from \\eqref{convunion} and observing that $\\max\\{\\alpha^{\\top}x \\colon x\\in Q_{k}\\} = u\\sum_{j=1}^{k}\\alpha_{\\sigma(j)} + \\ell\\sum_{j=k+1}^{n}\\alpha_{\\sigma(j)}$.\n\\end{proof}\n\nHence, assuming computation of each $\\mk$ takes $O(1)$ time, linear optimization over $\\conv{G}$ can be solved in $O(n\\log{n})$ time by sorting the vector $\\alpha$.\n\nDue to the well-known equivalence of the complexity of optimization and separation over a polyhedron, it follows that a given point can be separated from the convex hull of $G$ in polynomial time. However, this connection invokes the ellipsoid algorithm, whose complexity is a high degree polynomial in the input encoding. The question of separation can be answered directly if there are polynomially many core facets. The approach is similar to that used for \\mythref{optcompl}, but here the point $(\\bar{x},\\bar{y})$ to be separated has $\\bar{x}$ sorted in nondecreasing order. \n\n\\begin{proposition}\t\\thlabel{sepcompl}\nIf $\\conv{G}$ has $t$ core facets, then a point can be separated from $\\conv{G}$ in $O(nt+n\\log{n})$ time.\n\\end{proposition}\n\\begin{proof}\nLet $(\\bar{x},\\bar{y})\\in X\\times\\real$ be a given point. Take $\\sigma\\in\\Sigma_{n}$ such that $\\bar{x}_{\\sigma(1)} \\le \\bar{x}_{\\sigma(2)} \\le \\cdots \\le \\bar{x}_{\\sigma(n)}$. Choose any core facet~\\eqref{sum24}. We have $\\beta_{1}\\le \\cdots\\le\\beta_{n}$ and need to check if this facet or any permuted facet is violated, i.e., whether there exists a permutation $\\tau\\in\\Sigma_{n}$ such that $\\beta_0+\\sum_{j=1}^n\\beta_{\\tau(j)} \\bar{x}_j+\\beta^{\\prime}\\bar{y} < 0$. This is equivalent to checking whether $- \\beta_{0} - \\beta^{\\prime}\\bar{y} > \\min\\{\\bar{x}^{\\top}v\\colon v\\in P_{n}(\\beta) \\}$, where $P_{n}(\\beta) = \\conv{\\{(\\beta_{\\tau(1)},\\beta_{\\tau(2)},\\dots,\\beta_{\\tau(n)}) \\colon \\tau\\in\\Sigma_{n} \\}}$ is the generalized permutahedron \\cite{bowman1972permutation} with respect to $\\beta$. Linear optimization over the permutahedron can be done via the sorting algorithm since the permutahedron is the base polymatroid polyhedron corresponding to a certain submodular function \\cite{rado52ineq} and \\citet{edmonds1970submodular} established that the sorting algorithm works for optimizing over any base polymatroid.
In particular, we have that our minimum is attained at $v^{*} = \\sigma^{-1}\\cdot\\hat{\\beta}$, where $\\hat{\\beta} := (\\beta_{n},\\beta_{n-1},\\dots,\\beta_{1})$ is the reverse sorting of $\\beta$, so that $v^{*}_{j} = \\beta_{n+1-\\sigma^{-1}(j)}$ for all $j\\in N$. Thus, checking separation of a single core facet and its permuted copies is equivalent to checking whether $-\\beta_{0} - \\beta^{\\prime}\\bar{y} > \\sum_{j=1}^{n}\\beta_{n+1-\\sigma^{-1}(j)}\\bar{x}_{j}$, which runs in $O(n)$ time. The overall complexity then becomes $O(nt+n\\log{n})$ due to the sorting of $\\bar{x}$ initially and enumeration over $t$ core facets.\n\\end{proof}\n\nAs seen in the above proof, the overall complexity is composed of $nt$ additions and multiplications and a sorting step which requires $O(n\\log{n})$ comparisons.\n}\n\n\\section{Conditions for Core Facets}\n\nA challenge in determining whether a valid core inequality is a core facet is the identification of a maximum number of affinely independent points within $\\conv{G}$ that satisfy the inequality exactly. Of course, we can restrict attention to only those points that are extreme to $\\conv{G}.$ The following lemma shows that the set of extreme points of $\\conv{G}$ which satisfy a core inequality exactly can be completely characterized in terms of the extreme points of the simplex $\\mathcal{S}$.\n\n\\begin{lemma}\t\\thlabel{foundation45}\nGiven an extreme point $(\\tilde{\\x},m(\\tilde{\\x}))$ of $\\conv{G}$ with $\\tilde{\\x}$ not an extreme point of $\\mathcal{S}$, let $r$ be the smallest index $j$ such that $\\tilde{x}_j =\\ell,$ and $s$ be the largest index $j$ such that $\\tilde{x}_j =u.$ Further let $p$ be the number of entries of $\\tilde{\\x}$ having value $u$ (so that $r \\leq p \\leq s-1$). Then $(\\x,y)=(\\tilde{\\x},m(\\tilde{\\x}))$ satisfies a valid core inequality \\eqref{sum24} exactly if and only if $(\\x,y)=(\\x^p,m(\\x^p))$ satisfies the inequality exactly, and $\\beta_r = \\ldots = \\beta_s.$\n\\end{lemma}\n\\begin{proof}\nWe have $\\beta_0+\\sum_{j=1}^n\\beta_j\\tilde{x}_j+\\beta^{\\prime}m(\\tilde{\\x}) \\geq {\\cal L}_p(\\beta_0,\\x[\\beta])+\\beta^{\\prime}m(\\x^p) \\geq 0$, where the first inequality is due to the nondecreasing values of $\\beta_j$ and the symmetry of $m(\\x),$ and the second inequality is due to the validity of \\eqref{sum24}.
The first inequality is satisfied exactly if and only if $\\beta_r = \\ldots = \\beta_s,$ while the second inequality is satisfied exactly if and only if \\eqref{sum24} is satisfied exactly at $(\\x^p,m(\\x^p)).$\n\\end{proof}\n\nA consequence is that given a valid core inequality, we can identify the subset of $2^n$ extreme points of $\\conv{G}$ that satisfy the inequality exactly by considering only the $(n+1)$ extreme points of $\\mathcal{S}.$ The next few results build upon this consequence to provide characteristics of core facets in terms of the extreme points of $\\mathcal{S}.$\n\n\\begin{proposition}\t\\thlabel{foundation22}\nA valid core inequality \\eqref{sum24} is a core facet only if it is satisfied exactly at $(\\x^k,m(\\x^k))$ for at least two extreme points $\\x^k$ of the simplex $\\mathcal{S},$ with at least one extreme point not being $\\x^0$ or $\\x^n.$\n\\end{proposition}\n\\begin{proof}\nSuppose that a valid core inequality \\eqref{sum24} is satisfied exactly at fewer than two such points $(\\x^k,m(\\x^k)),$ or at only the points $(\\x^0,m(\\x^0))$ and $(\\x^n,m(\\x^n)).$ Since the dimension of $\\conv{G}$ is $n+1,$ it is sufficient to show that the inequality is satisfied exactly at no more than $n$ affinely independent extreme points of $\\conv{G}.$ Three cases arise. First, if the inequality is satisfied exactly at no such $(\\x^k,m(\\x^k)),$ then \\eqref{sum24} is not satisfied exactly at any of the $2^n$ extreme points of $\\conv{G}$ by \\mythref{foundation45}. Second, if the inequality is satisfied exactly at precisely one such $(\\x^{\\tilde{k}},m(\\x^{\\tilde{k}}))$ then, by \\mythref{foundation45}, the only extreme points $(\\x,m(\\x))$ of $\\conv{G}$ that can possibly satisfy \\eqref{sum24} exactly are the $n \\choose \\tilde{k}$ points with $\\x$ having $\\tilde{k}$ entries of value $u$ and $(n-\\tilde{k})$ entries of value $\\ell.$ However, as the two linearly independent hyperplanes $\\sum_{j=1}^nx_j=\\tilde{k}u+(n-\\tilde{k})\\ell$ and $y=m(\\x^{\\tilde{k}})$ both pass through these $n \\choose \\tilde{k}$ points, there exist at most $n$ such affinely independent points. Finally, if the inequality is satisfied exactly at only $(\\x^0,m(\\x^0))$ and $(\\x^n,m(\\x^n)),$ then \\mythref{foundation45} gives us that these are the only extreme points of $\\conv{G}$ that satisfy \\eqref{sum24} exactly.\n\\end{proof}\n\nThe necessary condition in \\mythref{foundation22} to be a core facet is also sufficient when all the coefficients are equal.\n\n\\begin{proposition}\t\\thlabel{equalcoeff}\nA valid core inequality \\eqref{sum24} with $\\beta_1 = \\ldots = \\beta_n$ is a core facet if and only if it is satisfied exactly at $(\\x^k,m(\\x^k))$ for at least two extreme points $\\x^k$ of the simplex $\\mathcal{S},$ with at least one extreme point not being $\\x^0$ or $\\x^n.$\n\\end{proposition}\n\\begin{proof}\nThe only if direction follows directly from \\mythref{foundation22}, and so we consider the if direction. Suppose the inequality is satisfied exactly at two such points $(\\x^p,m(\\x^p))$ and $(\\x^q,m(\\x^q)),$ with $\\x^p$ not being $\\x^0$ or $\\x^n.$ Then \\mythref{foundation45} gives us that the $n \\choose p$ extreme points $(\\x,m(\\x))$ of $\\conv{G}$ with $\\x$ having $p$ entries of value $u$ and $(n-p)$ entries of value $\\ell$ satisfy the inequality exactly. The proof is to show that there exist $n$ affinely independent points from amongst this set of $n \\choose p$ points. 
Then these $n$ points, together with $(\\x^q,m(\\x^q)),$ will form an affinely independent set of $(n+1)$ points because each of the first $n$ points satisfies the equation $\\sum_{j=1}^nx_j=pu+(n-p)\\ell,$ but $(\\x^q,m(\\x^q))$ does not. In fact, it is sufficient to show that the extreme points of $X$ associated with these $n \\choose p$ points are affinely independent.\n\nSince the affine independence of a collection of points is unaffected when the same value is subtracted from every entry of each point, and when each point is multiplied by a nonzero scalar, the affine independence of the associated $n \\choose p$ extreme points of $X$ remains unchanged when every $u$ is replaced with 1 and every $\\ell$ is replaced with $0.$ Consider the $n \\times {{n} \\choose {p}}$ matrix $A$ defined so that each column corresponds to one such point, upon application of these operations. If $p=1,$ then $A$ is a permutation matrix, and the $n \\choose p$ points are affinely independent. Otherwise, $A$ has the two properties that: each row contains $\\kappa_1 \\equiv {{n-1} \\choose {p-1}}$ entries of value 1, and every pair of two distinct rows contains $\\kappa_2 \\equiv {{n-2} \\choose {p-2}}$ common entries of value $1.$ Hence, $AA^T$ is the $n \\times n$ matrix having $\\kappa_1$ along the main diagonal and $\\kappa_2$ elsewhere. Since $\\mbox{rank}(AA^T)=n,$ because the common row sum allows us to subtract $\\kappa_2$ from every entry of $AA^T$ to obtain a lower-triangular matrix with $(\\kappa_1-\\kappa_2) \\neq 0$ along the diagonal, then $\\mbox{rank}(A)=n,$ and the proof is complete.\n\\end{proof}\n\nThe below proposition gives further conditions, in terms of the extreme points $\\x^k$ of $\\mathcal{S},$ for a valid core inequality to be a core facet.\n\n\\begin{proposition}\t\\thlabel{foundation24}\nGiven any two valid core inequalities of the form \n\\begin{equation}\n\\bar{\\beta}_0+\\sum_{j=1}^n\\bar{\\beta}_jx_j+\\beta^{\\prime}y \\geq 0 \\mbox{ and } \\hat{\\beta}_0+\\sum_{j=1}^n\\hat{\\beta}_jx_j+\\beta^{\\prime}y \\geq 0, \\label{firsti}\n\\end{equation}\nif \n\\begin{equation}\n{\\cal L}_k(\\bar{\\beta}_0,\\bar{\\x[\\beta]}) \\leq {\\cal L}_k(\\hat{\\beta}_0,\\hat{\\x[\\beta]}) \\; \\forall \\; k \\in \\{0, \\ldots, n\\}, \\label{secondi} \n\\end{equation}\nwith strict inequality holding for at least one $k$ in \\eqref{secondi}, then the right inequality of \\eqref{firsti} is not a facet.\n\\end{proposition}\n\\begin{proof}\nIt is sufficient to show that every extreme point $(\\x,m(\\x))$ of $\\conv{G}$ that satisfies the right inequality of \\eqref{firsti} exactly also satisfies the left inequality of \\eqref{firsti} exactly. This statement holds for the $(n+1)$ extreme points $(\\x^k,m(\\x^k)), k \\in \\{0, \\ldots, n\\},$ by \\eqref{secondi}, and\nso we arbitrarily select any one of the remaining $2^n-(n+1)$ extreme points, say $(\\tilde{\\x},m(\\tilde{\\x})),$ and suppose that the right inequality holds exactly at this point. 
Some $p \\in \\{1, \\ldots, n-1\\}$ entries of $\\tilde{\\x}$ have value $u$ and $(n-p)$ entries have value $\\ell,$ with the first $p$ entries not all equal to $u.$ \nThe proof reduces to showing that \n\\begin{equation}\n0={\\cal L}_p(\\hat{\\beta}_0,\\hat{\\x[\\beta]})+\\beta^{\\prime}m(\\x^p)={\\cal L}_p(\\bar{\\beta}_0,\\bar{\\x[\\beta]})+\\beta^{\\prime}m(\\x^p)=\\bar{\\beta}_0+\\sum_{j=1}^n\\bar{\\beta}_j\\tilde{x}_j+\\beta^{\\prime}m(\\tilde{\\x}).\n\\label{summary}\n\\end{equation}\nEach equality of \\eqref{summary} is considered separately.\n\\begin{itemize}\n\\item Since, by assumption, $(\\tilde{\\x},m(\\tilde{\\x}))$ satisfies the right inequality of \\eqref{firsti} exactly, \\mythref{foundation45} gives us that the first equality of \\eqref{summary} holds true, and also that $\\hat{\\beta}^*=\\hat{\\beta}_r = \\ldots =\\hat{\\beta}_s$ for some scalar $\\hat{\\beta}^*,$ where $r$ and $s$ are, respectively, the indices of the first and last entries of $\\tilde{\\x}$ which differ from $\\x^p.$\n\\item The second equality of \\eqref{summary} holds true by \\eqref{secondi} with $k=p,$ the first equality of \\eqref{summary}, and the validity of the left inequality of \\eqref{firsti} at $(\\x^p,m(\\x^p)).$ \n\\item\n Since as noted above, $\\hat{\\beta}^*=\\hat{\\beta}_r = \\ldots =\\hat{\\beta}_s$ for some scalar $\\hat{\\beta}^*,$ we have that\n\\begin{equation}\n{\\cal L}_p(\\hat{\\beta}_0,\\hat{\\x[\\beta]})+\\hat{\\beta}^*(u-\\ell)(s-p)={\\cal L}_s(\\hat{\\beta}_0,\\hat{\\x[\\beta]}) \\geq {\\cal L}_s(\\bar{\\beta}_0,\\bar{\\x[\\beta]})={\\cal L}_p(\\bar{\\beta}_0,\\bar{\\x[\\beta]})+(u-\\ell)(\\sum_{j=p+1}^s\\bar{\\beta}_j) \\label{balance3}\n\\end{equation}\nand\n\\begin{equation}\n{\\cal L}_p(\\hat{\\beta}_0,\\hat{\\x[\\beta]})-\\hat{\\beta}^*(u-\\ell)(p-r+1)={\\cal L}_{r-1}(\\hat{\\beta}_0,\\hat{\\x[\\beta]}) \\geq {\\cal L}_{r-1}(\\bar{\\beta}_0,\\bar{\\x[\\beta]})={\\cal L}_p(\\bar{\\beta}_0,\\bar{\\x[\\beta]})-(u-\\ell)(\\sum_{j=r}^p\\bar{\\beta}_j), \\label{balance4}\n\\end{equation}\nwhere, within \\eqref{balance3} and \\eqref{balance4}, the equalities follow from \\eqref{anoto25}, and the inequalities are due to \\eqref{secondi}. Combine these expressions and invoke ${\\cal L}_p(\\hat{\\beta}_0,\\hat{\\x[\\beta]})={\\cal L}_p(\\bar{\\beta}_0,\\bar{\\x[\\beta]})$ from the second equality of \\eqref{summary} to obtain \n\\begin{equation}\n\\hat{\\beta}^* \\leq \\left(\\frac{1}{p-r+1}\\right)\\left(\\sum_{j=r}^p\\bar{\\beta}_j\\right) \\leq \\left(\\frac{1}{s-p}\\right)\\left(\\sum_{j=p+1}^s\\bar{\\beta}_j\\right) \\leq \\hat{\\beta}^*, \\nonumber\n\\end{equation}\nwhere the three inequalities follow from \\eqref{balance4}, the nondecreasing property of $\\bar{\\x[\\beta]},$ and \\eqref{balance3}, respectively. Again by the nondecreasing property of $\\bar{\\x[\\beta]},$ we have that $\\hat{\\beta}^*=\\bar{\\beta}_r = \\ldots =\\bar{\\beta}_s,$ giving the third equality of \\eqref{summary} by \\mythref{foundation45}.\\qedhere\n\\end{itemize}\n\\end{proof}\n\nThis leads us to the following necessary and sufficient condition for a valid inequality with distinct coefficients to be a core facet.\n\n\\begin{corollary}\\label{sfirst}\nA valid inequality $\\bar{\\beta}_0+\\sum_{j=1}^n\\bar{\\beta}_jx_j+\\beta^{\\prime}y \\geq 0$ with $\\bar{\\beta}_1 < \\ldots < \\bar{\\beta}_n$ is a core facet if and only if it is satisfied exactly at $(\\x^k,m(\\x^k))$ for $k \\in\\{0, \\ldots, n\\}$. 
In this case, no other core facet can exist with the given $\\beta^{\\prime}.$\n\\end{corollary}\n\\begin{proof}\nThe if direction follows from the $n+1$ points $(\\x^k,m(\\x^k))$ for $k \\in\\{0, \\ldots, n\\}$ being affinely independent, and so we consider the only if direction. \\mythref{foundation45} gives us that the only extreme points to $\\conv{G}$ that can possibly satisfy the given inequality exactly are $(\\x^k,m(\\x^k))$ for $k \\in\\{0, \\ldots, n\\}.$ Since the inequality is a facet, these $(n+1)$ affinely independent points must satisfy the inequality exactly, giving ${\\cal L}_k(\\bar{\\beta}_0,\\bar{\\x[\\beta]})+\\beta^{\\prime}m(\\x^k)=0$ for $k \\in\\{0, \\ldots, n\\}.$ No other valid inequality $\\hat{\\beta}_0+\\sum_{j=1}^n\\hat{\\beta}_jx_j+\\beta^{\\prime}y \\geq 0$ for $\\conv{G}$ with the given $\\beta^{\\prime}$ can then be a facet by \\mythref{foundation24}.\n\\end{proof}\n\nAnother set of necessary conditions for being a core facet is obtained below.\n\n\\begin{proposition}\t\\thlabel{equalcoeff7}\nGiven any two valid core inequalities of the form \\eqref{firsti}, suppose that the following conditions hold.\n\\begin{enumerate}\n\\item $\\bar{\\beta}_u=\\bar{\\beta}_v$ for each $(u,v),u0$ so that the inequality\n\\begin{equation}\n(\\beta_0-pu\\epsilon)+\\sum_{j=1}^{p}(\\beta_j+\\epsilon)x_j+\\sum_{j=p+1}^n\\beta_jx_j+\\beta^{\\prime}y \\geq 0, \\label{lastly}\n\\end{equation}\nwith $(\\beta_p+\\epsilon) \\leq \\beta_{p+1},$ is valid for $\\conv{G}.$ Then every extreme point $(\\x,m(\\x))$ to $\\conv{G}$ with $x_j=u$ for $j \\in \\{1, \\ldots, p\\}$ will have the left side of \\eqref{lastly} equal to the left side of \\eqref{sum24}, while every extreme point $(\\x,m(\\x))$ to $\\conv{G}$ with $x_j=\\ell$ for at least one $j \\in \\{1, \\ldots, p\\}$ will have the left side of \\eqref{lastly} strictly less than the left side of \\eqref{sum24}. Relative to the extreme points of the simplex $\\mathcal{S}$, the left sides of \\eqref{lastly} and \\eqref{sum24} will both take value ${\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)$ at $(\\x^k,m(\\x^k))$ for each $k \\in \\{p, \\ldots, n\\},$ but the left side of \\eqref{lastly} will be ${\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)-(p-k)(u-\\ell)\\epsilon$ at $(\\x^k,m(\\x^k))$ for each $\\x^k, k \\in \\{0, \\ldots, p-1\\},$ which is $(p-k)(u-\\ell)\\epsilon>0$ less than the left side of \\eqref{sum24}. Define $\\epsilon=\\min\\{\\epsilon^{\\prime},(\\beta_{p+1}-\\beta_{p})\\},$ where $\\epsilon^{\\prime}=\\mbox{min}_{k \\in \\{0, \\ldots, p-1\\}}\\left\\{\\frac{{\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)}{(p-k)(u-\\ell)}\\right\\}.$ Then $\\epsilon >0$ and inequality \\eqref{lastly} is valid for $\\conv{G}$ by \\mythref{foundation}, as it is valid at all $(\\x^k,m(\\x^k)),k \\in \\{0, \\ldots, n\\}.$ \n\\item If $s=n,$ the second conclusion follows trivially. 
Otherwise $s \\leq (n-1)$ and, by contradiction, define $p \\geq s$ so that $\\beta_{s} = \\beta_p < \\beta_{p+1}.$ It is sufficient to show that there exists an $\\epsilon>0$ so that the inequality\n\\begin{equation}\n\\beta_0+(n-p)\\ell \\epsilon+\\sum_{j=1}^{p}\\beta_jx_j+\\sum_{j=p+1}^n(\\beta_j-\\epsilon)x_j+\\beta^{\\prime}y \\geq 0, \\label{lastly2}\n\\end{equation}\nwith $\\beta_p \\leq (\\beta_{p+1}-\\epsilon),$ is valid for $\\conv{G}.$ Then every extreme point $(\\x,m(\\x))$ to $\\conv{G}$ with $x_j=\\ell$ for $j \\in \\{p+1, \\ldots, n\\}$ will have the left side of \\eqref{lastly2} equal to the left side of \\eqref{sum24}, while every extreme point $(\\x,m(\\x))$ to $\\conv{G}$ with $x_j=u$ for at least one $j \\in \\{p+1, \\ldots, n\\}$ will have the left side of \\eqref{lastly2} strictly less than the left side of \\eqref{sum24}. Relative to the extreme points of the simplex $\\mathcal{S}$, the left sides of \\eqref{lastly2} and \\eqref{sum24} will both take value ${\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)$ at $(\\x^k,m(\\x^k))$ for each $k \\in \\{1, \\ldots, p\\},$ but the left side of \\eqref{lastly2} will be ${\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)-(k-p)(u-\\ell)\\epsilon$ at $(\\x^k,m(\\x^k))$ for each $\\x^k, k \\in \\{p+1, \\ldots, n\\},$ which is $(k-p)(u-\\ell)\\epsilon>0$ less than the left side of \\eqref{sum24}. Define $\\epsilon=\\min\\{\\epsilon^{\\prime},(\\beta_{p+1}-\\beta_{p})\\},$ where $\\epsilon^{\\prime}=\\mbox{min}_{k \\in \\{p+1, \\ldots, n\\}}\\left\\{\\frac{{\\cal L}_k(\\beta_0,\\x[\\beta]) +\\beta^{\\prime} m(\\x^k)}{(k-p)(u-\\ell)}\\right\\}.$ Then $\\epsilon >0$ and inequality \\eqref{lastly2} is valid for $\\conv{G}$ by \\mythref{foundation}, as it is valid at all $(\\x^k,m(\\x^k)),k \\in \\{0, \\ldots, n\\}.$ \\qedhere\n\\end{itemize}\n\\end{proof}\n\n\\section{RLT for Multilinear Functions}\nIn this section, we present the Reformulation-Linearization Technique (RLT) as it pertains to the set $G$. We provide a brief description of select aspects of the general RLT process that are relevant to this study, emphasizing some key properties. Then we review the mathematical details of the RLT in terms of Kronecker products of matrices. This RLT machinery enables us to characterize exactness of the core facets for $\\conv{G}$.\n\n\\subsection{Main Ideas}\nThe RLT is a general methodology for reformulating mixed-integer linear and polynomial programs for the purpose of obtaining tight linear programming relaxations. While there is a rich body of literature on the topic \\citep{sherali1994hierarchy,sherali1990hierarchy,sherali99rlt} we focus attention here on a box-constrained region of $n$ variables $x_j,$ where each $x_j$ is restricted to lie between variable bounds $L_j$ and $U_j.$ The RLT gives a hierarchy of successively tighter polyhedral relaxations, but we consider only the highest level $n$ which affords the convex hull representations.\n\nSpecifically, consider the set\n\\begin{equation}\nX^{\\prime} \\equiv \\left\\{\\x\\in \\real^n: L_j \\leq x_j \\leq U_j \\; \\forall \\; j \\in N\\right\\} \\label{Xprimedef}\n\\end{equation}\nhaving each $L_j 0.$ The two corollaries serve to simplify this analysis. Throughout, the inequalities $F_{K}(x) \\geq 0$ of \\eqref{RLTstep} are assumed to have $L_j=\\ell$ and $U_j=u$ for all $j \\in N$ within \\eqref{Fun} as in $X$ of \\eqref{Xdef}. 
\n\n\\begin{lemma}\\thlabel{fite000}\nGiven any $K \\subseteq N$ and any $\\tilde{\\x} \\in X$ of \\eqref{Xdef}, partition $N$ into $N_1,$ $N_2,$ and $N_3$ so that $N_1 \\equiv\\{j:\\tilde{x}_j=\\ell\\},$ $N_2 \\equiv\\{j:\\tilde{x}_j=u\\},$ and $N_3 \\equiv\\{j:\\ell<\\tilde{x}_j0 \\mbox{ has } F_{K}(x)=0 \\mbox{ at } \\bar{\\x}. \\label{newbeed}\n\\end{equation}\n$\\left(1 \\rightarrow 2\\right)$ Given $(\\x,y)=(\\tilde{\\x},m(\\tilde{\\x}))$ satisfies \\eqref{sum24} exactly, the ``only if\" direction of \\eqref{newbeed} with $\\bar{\\x}=\\tilde{\\x}$ gives us that every $\\pi_K>0$ has $F_{K}(x)=0$ at $\\tilde{\\x}.$ Implication $1 \\rightarrow 2$ of \\mythref{fite00} then gives us that every $\\pi_K>0$ has $F_{K}(x)=0$ at every $\\hat{\\x} \\in X$ having $\\hat{x}_j=\\ell \\; \\forall \\; j \\in N_1$ and $\\hat{x}_j=u\\; \\forall \\; j \\in N_2.$ Then the ``if\" direction of \\eqref{newbeed} with each such $\\hat{\\x}$ substituted for $\\bar{\\x}$ gives the result.\\\\\n$\\left(3 \\rightarrow 1\\right)$ Given $(\\x,y)=(\\hat{\\x},m(\\hat{\\x}))$ satisfies \\eqref{sum24} exactly at every extreme point $\\hat{\\x}$ of $X$ having $\\hat{x}_j=\\ell \\; \\forall \\; j \\in N_1$ and $\\hat{x}_j=u\\; \\forall \\; j \\in N_2,$ the ``only if\" direction of \\eqref{newbeed} with each such $\\hat{\\x}$ substituted for $\\bar{\\x}$ gives us that every $\\pi_K>0$ has $F_{K}(x)=0$ at every such $\\hat{\\x}.$ Implication $3 \\rightarrow 1$ of \\mythref{fite00} then gives us that every $\\pi_K>0$ has $F_{K}(x)=0$ at $\\tilde{\\x}.$ Then the ``if\" direction of \\eqref{newbeed} with $\\bar{\\x}=\\tilde{\\x}$ gives the result.\n\\end{proof}\n\nWe invoke \\mythref{foundation45} and \\mythref{fitt} to establish a theorem and two corollaries. The theorem gives, in terms of the extreme points of the simplex $\\mathcal{S}$ of \\eqref{simpleex}, a necessary and sufficient condition for a valid core inequality \\eqref{sum24} to be satisfied exactly at a point $(\\tilde{\\x},\\tilde{y})\\in G.$ This theorem is a generalization of \\mythref{foundation45} in that, by restricting $n=(p+b)$ within the theorem, the stated point $(\\tilde{\\x},\\tilde{y})\\in G$ must be the extreme point $(\\tilde{\\x},m(\\tilde{\\x}))\\in G$ of $\\conv{G}$ found within \\mythref{foundation45}. The corollaries are special cases of the theorem when the valid core inequality \\eqref{sum24}: is satisfied exactly at $(\\x^j,m(\\x^j))$ for all $j \\in \\{0, \\ldots, n\\},$ and has all coefficients $\\beta_j$ equal to the same scalar, say $\\bar{\\beta},$ respectively.\n\n\\begin{theorem}\\thlabel{exact1111}\nGiven a point $(\\tilde{\\x},\\tilde{y})\\in G$ with $\\tilde{\\x}$ not an extreme point of the simplex $\\mathcal{S}$ of \\eqref{simpleex}, let $r$ be the smallest index $j$ such that $\\tilde{x}_j \\ell.$ Further let $p$ be the number of entries of $\\tilde{\\x}$ having value $u$ and $b$ be the number of entries of $\\tilde{\\x}$ having value $\\ell.$ Then $(\\tilde{\\x},\\tilde{y})$ satisfies a valid core inequality \\eqref{sum24} exactly if and only if $(\\x,y)=(\\x^j,m(\\x^j))$ satisfies \\eqref{sum24} exactly for all $j \\in\\{p,\\ldots,n-b\\},$ and $\\beta_r=\\ldots=\\beta_s.$ \n\\end{theorem}\n\\begin{proof}\nIf $n=(p+b)$ so that all entries of $\\tilde{\\x}$ take either value $\\ell$ or $u,$ then $(\\tilde{\\x},\\tilde{y})\\in G$ is the extreme point $(\\tilde{\\x},m(\\tilde{\\x}))$ of $\\conv{G},$ so the result is \\mythref{foundation45}. 
Otherwise, $n \\geq (p+b+1),$ and we adopt the notation of \\mythref{fitt} that $N_1 \\equiv\\{j:\\tilde{x}_j=\\ell\\},$ $N_2 \\equiv\\{j:\\tilde{x}_j=u\\},$ and $N_3 \\equiv\\{j:\\ell<\\tilde{x}_j \\ell.$ Given a valid core inequality \\eqref{sum24} that is satisfied exactly at $(\\x^j,m(\\x^j))$ for all $j \\in \\{0, \\ldots, n\\},$ the point $(\\tilde{\\x},\\tilde{y})$ satisfies this inequality exactly if and only if $\\beta_r=\\ldots=\\beta_s.$ \n\\end{corollary}\n\n\\mythref{exact1111} also simplifies when the valid core inequality \\eqref{sum24} has, for some scalar $\\bar{\\beta},$ $\\beta_j=\\bar{\\beta}$ for all $j \\in N.$ In this case, the parameters $r$ and $s$ within the theorem are no longer needed, as they serve only to restrict a subset of the coefficients $\\beta_j$ to equal. The simplification is stated formally below.\n\n\\begin{corollary}\\thlabel{exactly1111}\nGiven a point $(\\tilde{\\x},\\tilde{y})\\in G$ with $\\tilde{\\x}$ not an extreme point of the simplex $\\mathcal{S}$ of \\eqref{simpleex}, let $p$ be the number of entries of $\\tilde{\\x}$ having value $u$ and $b$ be the number of entries of $\\tilde{\\x}$ having value $\\ell.$ Then $(\\tilde{\\x},\\tilde{y})$ satisfies a valid core inequality \\eqref{sum24} having $\\beta_j=\\bar{\\beta}$ for all $j \\in N$ exactly if and only if $(\\x,y)=(\\x^j,m(\\x^j))$ satisfies \\eqref{sum24} exactly for all $j \\in\\{p,\\ldots,n-b\\}.$ \n\\end{corollary}\n\n\\section{Supermodular Functions}\t\\label{sec:supermod}\n\nIn this subsection, we describe $\\conv{G}$ for SMPs $m(\\x)$ that are \\emph{supermodular} over the extreme points of $X$. Such a function $m(\\x)$ over these $2^n$ points can be expressed as a set function $f(A)$ defined over $A \\subseteq N$ so that $f(A)=m(\\x)$ when evaluated at that extreme point $\\x$ having $x_j=u$ for $j \\in A$ and $x_j=\\ell$ for $j \\notin A.$ Recall that a set function $f$ is defined to be supermodular over $N$ if and only if\n\\begin{equation}\nf(S \\cup \\{r\\})-f(S) \\leq f(S \\cup \\{r,t\\})-f(S \\cup \\{t\\}) \\; \\mbox{ for } r,t \\in N, r \\neq t, \\mbox{ and }S\\subseteq N \\backslash \\{r,t\\}, \\label{rest0777}\n\\end{equation}\nwe have that the function $m(\\x)$ is supermodular over the $2^n$ extreme points of $X$ if and only if \n\\begin{equation}\nm(\\x^{k})-m(\\x^{k-1}) \\leq m(\\x^{k+1})-m(\\x^{k}), \\quad \\forall \\; k \\in \\{1, \\ldots, n-1\\}. \\label{rest0}\n\\end{equation}\nThis equivalence follows by considering, for each $k \\in \\{1, \\ldots, n-1\\},$ all sets $S \\subseteq N$ within \\eqref{rest0777} having $|S|=(k-1),$ and all $\\{r,t\\} \\in N \\backslash S,$ and by invoking the symmetry of $m(\\x)$ to obtain that $f(T)=m(\\x^k)$ for all $T \\subseteq N$ with $|T|=k.$ The function $m(\\x)$ is \\emph{strictly supermodular} over the $2^n$ extreme points of $X$ if and only if the $(n-1)$ inequalities of \\eqref{rest0} are satisfied strictly. 
Henceforth, for brevity, we will refer to functions $m(\\x)$ that are (strictly) supermodular over the $2^n$ extreme points of $X$ as being (strictly) supermodular.\n\nThe below theorem explicitly states all core facets for supermodular $m(\\x).$\n\n\\begin{theorem} \\thlabel{result1}\nWhen $m(\\x)$ is supermodular, there exist at most $(n+1)$ core facets for $\\conv{G},$ and all such facets, subject to repetition, are \n\\begin{equation}\n\\frac{um(\\x^0)-\\ell m(\\x^n)}{u-\\ell}+\\sum_{j=1}^n\\left(\\frac{m(\\x^j)-m(\\x^{j-1})}{u-\\ell}\\right)x_j-y \\geq 0, \\label{res1}\n\\end{equation}\nand\n\\begin{equation}\n-m(\\x^k) - \\left(\\frac{m(\\x^{k})-m(\\x^{k-1})}{u-\\ell}\\right)\\left(\\sum_{j=1}^nx_j-[ku+(n-k)\\ell]\\right)+y \\geq 0, \\quad \\forall \\; k \\in N. \\label{res2}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe $(n+1)$ inequalities of \\eqref{res1} and \\eqref{res2} are valid for $\\conv{G}$ by \\mythref{foundation}, as they are readily verified to hold for all $(\\x^k,m(\\x^k)), k \\in\\{0, \\ldots, n\\}.$ Inequality~\\eqref{res1} is a facet because these same $(n+1)$ affinely independent points satisfy it exactly, and it is a core facet because the $\\beta_j$ are nondecreasing by \\eqref{rest0}. In addition, no other core facet can exist with $\\beta^{\\prime}=-1$ by \\mythref{foundation24} because the functions ${\\cal L}_k(\\beta_0,\\x[\\beta])$ from \\eqref{anoto25}, with $(\\beta_0,\\x[\\beta])$ defined in terms of \\eqref{res1}, have ${\\cal L}_k(\\beta_0,\\x[\\beta])=m(\\x^k)$ for all $k \\in \\{0, \\ldots, n\\}.$ Relative to \\eqref{res2}, for each $k \\in N,$ the corresponding inequality is a core facet by \\mythref{equalcoeff}, as it is satisfied exactly at the two points $(\\x^{k-1},m(\\x^{k-1}))$ and $(\\x^k,m(\\x^k)).$ To show that no other core facet can exist with $\\beta^{\\prime}=1$ and complete the proof, it is sufficient to show that every core facet $\\beta_0+\\sum_{j=1}^n\\beta_jx_j+y \\geq 0$ which is satisfied exactly at some $(\\x^{r},m(\\x^{r}))$ and $(\\x^{s},m(\\x^{s}))$ with $rr,$ the first equality follows from the facet holding exactly at $(\\x^r,m(\\x^r))$ and $(\\x^{s},m(\\x^{s})),$ the second equality is algebra, the second inequality follows from the nondecreasing values of $\\beta_j,$ and the final inequality follows from the feasibility of $(\\x^{r+1},m(\\x^{r+1}))$ to $\\conv{G}.$ Then $m(\\x^{r+1})=-\\beta_0-u\\left(\\sum_{j=1}^{r+1}\\beta_j\\right)-\\ell\\left(\\sum_{j=r+2}^{n}\\beta_j\\right)$ so that the core facet is satisfied exactly at $(\\x^{r+1},m(\\x^{r+1})).$ The proof is complete.\n\\end{proof}\n\n\\mythref{result1} states that there exist \\emph{at most} $(n+1)$ core facets because the inequalities of \\eqref{res2} can repeat. Repetition will occur whenever $\\left(m(\\x^{p})-m(\\x^{p-1})\\right)=\\left(m(\\x^{q})-m(\\x^{q-1})\\right)$ for distinct $p,q \\in N.$ Notably, if $m(\\x)$ is strictly supermodular, then repetitions will not occur so that \\eqref{res2} will contain $n$ distinct facets. \n\nSubject to permutations of $\\x[\\beta],$ inequalities \\eqref{res1} define the concave envelope of $m(\\x).$ \\akshay{These inequalities are a special case of the polymatroid inequalities that are known for general supermodular functions \\cite{lovasz1983submod,tawarmalani2013explicit}. 
Polymatroid inequalities are known to be separated easily in $O(n\\log{n})$ time using a sorting algorithm since \\citet{edmonds1970submodular} showed how to optimize a linear function over the polymatroid polyhedron corresponding to a submodular function.} Inequalities~\\eqref{res2} have equal coefficients on the variables and hence they do not need to be permuted and define the convex envelope of $m(\\x)$. \\akshay{Our proof for these inequalities is an alternative proof to that of \\citet[Theorem 4.6]{tawarmalani2013explicit}.} For a submodular function $m(\\x)$ (meaning that the inequality in \\eqref{rest0} is switched to $\\ge$), inequalities~\\eqref{res1}, subject to permutations of $\\x[\\beta],$ define the convex envelope, and inequalities~\\eqref{res2} define the concave envelope subject to the following modifications: all occurrences of $m(\\x^j),$ $j \\in \\{0, \\ldots, n\\},$ and $y$ are negated.\n\n\nNow we use \\mythref{exactly1110,exactly1111} to identify, for each of the $(n+1)$ core facets \\eqref{res1} and \\eqref{res2} of \\mythref{result1}, the set of all points $(\\x,y) \\in G$ that satisfies it exactly. A key ingredient of \\mythref{result1} that is useful in our upcoming proof is that every core facet $\\beta_0+\\sum_{j=1}^n\\beta_jx_j+y \\geq 0$ of the form \\eqref{res2} which is satisfied exactly at some $(\\x^{r},m(\\x^{r}))$ and $(\\x^{s},m(\\x^{s}))$ with $r \\ell\\}$.\n\\item $(\\x,y)$ satisfies \\eqref{res2} exactly if and only if $(\\x^{j},m(\\x^{j}))$ satisfies \\eqref{res2} exactly for $j=p,\\dots,n-q$, where $p=|\\{j\\colon x_{j}=u \\} |$ and $q=|\\{j\\colon x_{j}=\\ell \\} |$.\n\\item $(\\x,y)$ satisfies \\eqref{res2} exactly if and only if $|\\{j\\colon x_{j}=u \\}| \\ge v$ and $|\\{j\\colon x_{j}=\\ell \\}| \\ge n-w$, where $v$ and $w$ are the smallest and largest, respectively, values of $j$ such that the point $(\\x^{j},m(\\x^{j}))$ satisfies \\eqref{res2} exactly.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n(1) As noted in the proof of \\mythref{result1}, the points $(\\x^k,m(\\x^k))$ satisfy \\eqref{res1} exactly for all $k \\in \\{0, \\ldots, n\\}.$ Then the result trivially holds true when ${\\x}$ is an extreme point of $\\mathcal{S},$ and it holds true when ${\\x}$ is not an extreme point of $\\mathcal{S}$ by \\mythref{exactly1110}.\n\n(2) The result trivially holds true when ${\\x}$ is an extreme point of $\\mathcal{S},$ and it holds true when ${\\x}$ is not an extreme point of $\\mathcal{S}$ by \\mythref{exactly1111} with $\\bar{\\beta}=-\\left(\\frac{m(\\x^k)-m(\\x^{k-1}}{u-\\ell}\\right).$\n\n(3) This is a restatement of above using Remark 6 of \\mythref{result1} which states that the set of extreme points of $\\mathcal{S}$ satisfying the facet exactly must be consecutive.\n\\end{proof}\n\n\nMore refined characterizations for exactness hold when the function is strictly supermodular.\n\n\\begin{corollary} \\thlabel{exactstrictsupermod}\nLet $m(\\x)$ be strictly supermodular and $(\\x,y)\\in G$.\n\\begin{enumerate}\n\\item $(\\x,y)$ satisfies \\eqref{res1} exactly if and only if $\\x$ is a convex combination of two consecutive extreme points $\\x^j$ and $\\x^{j+1}$ of $\\mathcal{S}.$\n\\item $(\\x,y)$ satisfies \\eqref{res2} exactly if and only if $|\\{j\\colon x_{j}=u_{j} \\} | \\ge k-1$ and $| \\{j\\colon x_{j} = \\ell \\}| \\ge n-k$.\n\\end{enumerate} \n\\end{corollary}\n\\begin{proof}\n(1) Follows directly from \\mythref{newsuper100} since the coefficients $\\beta_j= \\left(\\frac{m(\\x^j)-m(\\x^{j-1})}{u-\\ell}\\right)$ for all $j 
\\in N$ of \\eqref{res1} are distinct, as every inequality within \\eqref{rest0} is satisfied strictly for strictly supermodular $m(\\x).$\n\n(2) Consider any $k \\in N.$ As noted in the proof of \\mythref{result1} and readily verified, $(\\x,y)=(\\x^{k-1},m(\\x^{k-1}))$ and $(\\x,y)=(\\x^k,m(\\x^k))$ each satisfy the core facet exactly. Thus, it is sufficient to show that supermodular $m(\\x)$ enforces $r=(k-1)$ and $s=k$ in \\mythref{newsuper100}. By contradiction, suppose that $s \\geq (r+2)$ so that each of $(\\x^{r},m(\\x^{r})),$ $(\\x^{r+1},m(\\x^{r+1})),$ and $(\\x^{r+2},m(\\x^{r+2}))$ satisfy the facet exactly. As in the proof of \\mythref{result1}, for each $p \\in \\{r+1,r+2\\},$ by inserting $(\\x^{p-1},m(\\x^{p-1}))$ and $(\\x^p,m(\\x^p))$ into the facet and subtracting the second expression from the first, we obtain\n\\begin{equation}\n \\frac{-\\left(m(\\x^{r+1})-m(\\x^r)\\right)}{u-\\ell}=\\beta_{r+1}=\\beta_{r+2}=\\frac{-\\left(m(\\x^{r+2})-m(\\x^{r+1})\\right)}{u-\\ell},\\nonumber\n\\end{equation}\nwhere $\\beta_{r+1}=\\beta_{r+2}$ is due to the form of \\eqref{res2}. The strict inequality of \\eqref{rest0} for supermodular $m(\\x)$ with $k=(r+1)$ yields the contradiction that $\\beta_{r+1} > \\beta_{r+2}.$\n\\end{proof}\n\nWe finish this section by remarking on supermodularity of an SMP, giving us some important families of functions $m(\\x)$ so that \\mythref{result1} characterizes their convex hull. First we provide an alternate characterization to~\\eqref{rest0} by expressing these requirements in terms of the coefficients $c_{j}$ of the function $m$.\n\n\\begin{proposition}\t\\thlabel{supermodcond}\n$m(\\x)$ is supermodular over $X$ if and only if $\\sum_{d=2}^{n}c_{d}(u-\\ell)^{2}\\vartheta_{k,d} \\ge 0$ for $k=1,\\dots,n-1$, where $\\vartheta_{2,2} = 1$ and\n\\[\n\\vartheta_{k,d} = \\sum_{\\substack{J \\subseteq N\\setminus\\{k,k+1\\} \\\\ |J|=d-2}}\\,\\prod_{j \\in J}x^{k-1}_j, \\qquad d = 3,\\dots,n,\\ k =1,\\dots,n-1,\n\\]\n\\end{proposition}\n\\begin{proof}\nFor each $k \\in \\{1, \\ldots, n-1\\},$ the difference $m(\\x^{k+1})-m(\\x^{k})-\\left(m(\\x^{k})-m(\\x^{k-1})\\right)$ is computable in terms of only those expressions $\\prod_{j \\in J}x_j$ within $m(\\x)$ that contain both $k \\in J$ and $\\{k+1\\} \\in J.$ For $d=2,$ this difference for the $n \\choose d$ expressions in $m(\\x)$ of degree $d$ is given by $c_2(u-\\ell)^2,$ while for each $d \\in \\{3, \\ldots, n\\},$ it is given by\n\\begin{equation} \nc_d(u-\\ell)^2\\left(\\sum_{\\substack{J \\subseteq N-\\{k,k+1\\} \\\\ |J|=d-2}}\\left(\\prod_{j \\in J}x^{k-1}_j\\right)\\right). \\label{newbee}\n\\end{equation}\nThen \\eqref{rest0} is equivalent to having the sum of these expressions from 2 to $n$ being nonnegative for all $k \\in \\{1, \\ldots, n-1\\}.$ \n\\end{proof}\n\n\\akshay{\nA multilinear monomial $x_{1}x_{2}\\cdots x_{n}$ is supermodular over the nonnegative orthant \\citep[cf.][]{tawarmalani2013explicit}. Since supermodularity is preserved under taking nonnegative combinations of functions, it follows that an SMP with $c_{j}\\ge 0$ for all $j$ is supermodular over $\\real^{n}_{+}$. This fact for nonnegative valued SMPs also follows from our characterization.}\n\n\\begin{corollary}\t\\thlabel{nonnegsupermod}\n$m(\\x)$ is supermodular over $X$ if $\\ell \\ge 0$ and $c_{j} \\ge 0$ for all $j=2,\\dots,n$.\n\\end{corollary}\n\\begin{proof}\n$\\ell\\ge 0$ implies that $\\x^{k}_{j} \\ge 0$ for all $j,k$, which implies $\\vartheta_{k,d}\\ge 0$ for all $k,d$. 
The summation in \\mythref{supermodcond} is nonnegative because $c_{j}\\ge 0$ for all $j$.\n\\end{proof}\n\nAnother consequence is that symmetric quadratic polynomials are always either submodular or supermodular, regardless of the box in $\\real^{n}$.\n\n\\begin{corollary}\nThe symmetric quadratic polynomial $\\tau\\sum_{i\\neq j}x_{i}x_{j}$ is either submodular or supermodular over $X$ for any $\\tau\\neq 0$.\n\\end{corollary}\n\\begin{proof}\nA symmetric quadratic polynomial is $m(\\x)$ with $c_{3} = \\cdots = c_{n} = 0$, and $c_{2}=\\tau$. Since all monomials are of the same degree 2, we can assume wlog that $\\ell\\ge 0$ because otherwise we can negate all the variables and consider the reflected box $X' = \\{x'\\sep -u_{j}\\le x'_{j} \\le -\\ell_{j} \\}$. Applying \\mythref{nonnegsupermod} to $m(\\x)$ if $\\tau > 0$ or to $-m(\\x)$ if $\\tau < 0$ yields the desired claim. \n\\end{proof}\n\nWhen considering the unit hypercube in $\\real^{n}$, supermodularity is attained through nonnegativity of a partial sum. Denote ${0 \\choose 0}=1$.\n\n\\begin{corollary}\n$m(\\x)$ is supermodular over $[0,1]^{n}$ if and only if\n\\begin{equation}\n\\sum_{d=2}^{k+1}{{k-1} \\choose {d-2}}c_d \\geq 0, \\quad \\forall \\; k \\in \\{1, \\ldots, n-1\\}. \\label{newbee2}\n\\end{equation} \n\\end{corollary}\n\\begin{proof}\nFollows immediately by simplifying the sum in~\\eqref{newbee}.\n\\end{proof}\n\n\n\nMore general families of SMPs are encompassed by \\eqref{newbee2} including, for example, those having $c_d \\geq 0$ for $d\\in\\{2, \\ldots, n-1\\}$ and $c_n \\geq -\\sum_{d=2}^{n-1}{{n-2} \\choose {d-2}}c_d,$ which reduces to $c_2 \\geq \\mbox{max}\\{0,(2-n)c_3\\}$ for cubic functions.\n\n\n\\section{Monomials with Reflection Symmetry\n\n\\akshay{We consider $m(\\x)$ to be a monomial $m(\\x)=c_n\\prod_{j=1}^n x_j$ over the box $X$. The coefficient $c_{n}$ can be taken to be 1 since we can scale $y$ to $y\/c_{n}$. Three cases arise depending on the location of the box in $\\real^{n}$: (1) $\\ell u = 0$, (2) $\\ell u > 0$, and (3) $\\ell u < 0$. After performing appropriate scalings, the first two cases are equivalent to that of $\\ell = 0, u =1$ and $\\ell = 1, u = r$ for some fixed $r > 1$, respectively. These two cases fall within the class of supermodular SMPs (cf.~\\mythref{nonnegsupermod}), and so \\mythref{result1} gives their convex hull. Indeed, it is easily verified that the resulting description of the convex hull matches the known envelopes for $\\prod_{j=1}^{n} x_{j}$ over $[0,1]^{n}$ \\cite{crama1993concave} and over $[1,r]^{n}$ for some fixed $r > 1$ (see \\cite[Proposition 4.1]{monomerr} which is a direct consequence of results from \\cite[Theorem 1]{benson2004concave} and \\cite[Theorem 4.6]{tawarmalani2013explicit}). The third case $\\ell u < 0$ is equivalent (upto scaling) to taking $\\ell = -1$ and $u = r$ for some $r > 0$ so that $X = [-1,r]^{n}$, and it is readily verified that a monomial is neither submodular nor supermodular over $[-1,r]^{n}$. Note that since we are considering monomials, scaling means that if each variable $x_{j}$ has different lower and upper bounds $\\ell_{j}$ and $u_{j}$, then as long as $u_{j} = -\\ell_{j}$ we can reduce to the case $X = [-1,r]^{n}$. The convex hull for the specific subcase having $r=1$ was first established by the authors in \\citep[Theorem 4.1]{monomerr}. However, it was done so without using the symmetry of the monomial and hence did not recognize the core facets. 
Our main goal in this section is to independently describe this convex hull by identifying the core facets and their properties. \n}\n\n\n\n\\begin{theorem}\t\\thlabel{resulting3}\nFor $m(\\x)=\\prod_{j=1}^nx_j$ and $-\\ell=u=1,$ there exist precisely $n+3$ core facets, and these are \n\\begin{align}\n&1-y \\geq 0 \\mbox{ and } 1+y \\geq 0 \\label{trivial} \\\\\n&(n-1)+\\sum_{j=1}^nx_j+(-1)^ny \\geq 0, \\label{pair1} \\\\\n&(n-1)-\\sum_{j=1}^nx_j+y \\geq 0. \\label{pair2}\\\\\n&(n-1)-\\left(\\sum_{j=1}^{t+1}x_j-\\sum_{j=t+2}^nx_j\\right) +(-1)^{n-(t+1)}y \\geq 0, \\quad \\forall \\; t \\in \\{0, \\ldots, n-2\\}. \\label{dominate2}\n\\end{align}\n\\end{theorem}\n\n\\akshay{The general case of convexifying $\\prod_{j=1}^{n}x_{j}$ over $[-1,r]^{n}$ for $r > 0, r \\neq 1$ remains an open question. Before giving our proof for the above theorem, let us note the complexity of separation and comment on the structure of the proposed inequalities which includes identifying exactness of the core facets. }\n\n\\begin{corollary}\nFor $m(\\x)=\\prod_{j=1}^nx_j$ and $-\\ell=u=1,$ a point can be separated from $\\conv{G}$ in $O(n\\log{n})$ time.\n\\end{corollary}\n\\begin{proof}\n\\mythref{resulting3} tells us that there are $t=n+3$ core facets. \\mythref{sepcompl} then implies that the complexity of separation is $O(n^{2} + n\\log{n})$. In the proof of this result, the complexity $O(nt)$ came from having to take a summation of $n$ terms while separating each of the $t$ core facets. Since the summation in both \\eqref{pair1} and \\eqref{pair2} is the same for any permutation of variables and therefore can be stored and reused, we get that the overall complexity is $O(t+n\\log{n}) = O(n\\log{n})$.\n\\end{proof}\n\n\n\\subsection{Structure of the inequalities}\n\nAll facets for $\\conv{G}$ are computable in terms of the core facets enumerated in \\mythref{resulting3} via permutations of $\\x[\\beta].$ Since each of the four core facets of \\eqref{trivial}, \\eqref{pair1}, and \\eqref{pair2} have all $\\beta_j$ equal, no permutations of $\\x[\\beta]$ will produce additional facets. However, for each $t \\in \\{0, \\ldots, n-2\\},$ the core facet \\eqref{dominate2} admits ${n} \\choose {t+1}$ facets. 
Then \\eqref{pair1}, \\eqref{pair2}, and \\eqref{dominate2} motivate the $2^n$ inequalities\n\\begin{equation}\n(n-1)-\\left(\\sum_{j \\in J}x_j-\\sum_{j \\in N\\setminus J}x_j\\right)+(-1)^{n-|J|}y \\geq 0, \\quad \\forall \\; J \\subseteq N, \\label{dominate3}\n\\end{equation}\nwhere \\eqref{dominate3} with $J = \\emptyset$ is \\eqref{pair1}, where \\eqref{dominate3} with $J=N$ is \\eqref{pair2}, and where \\eqref{dominate3} for each $J \\subseteq N$ with $|J| \\in \\{1, \\ldots, n-1\\}$ are the ${n \\choose |J|}$ facets obtained by permutations of $\\x[\\beta]$ in \\eqref{dominate2} when $(t+1)=|J|.$ The desired representation of $\\conv{G}$ is then the two inequalities $-1 \\leq y \\leq 1$ of \\eqref{trivial}, the $2^n$ inequalities of \\eqref{dominate3} and, for the case in which $n \\geq 3,$ the $2n$ inequalities $-1 \\leq x_j \\leq 1 \\; \\forall \\; j \\in N.$ We can combine and more succinctly write the $(2+2^n+2n)$ inequalities of \\eqref{trivial}, \\eqref{dominate3}, and the restrictions $-1 \\leq x_j \\leq 1$ for $j \\in N$ for the cases having $n \\geq 3,$ by letting the variable $x_{n+1}$ denote $y$ and by letting $N^{\\prime} = \\{1, \\ldots, n+1\\}.$ Using this new definition of variables, \\eqref{dominate3} can be partitioned in terms of the exponents $(n-|J|)$ as\n\\begin{equation}\n(n-1)-\\left(\\sum_{j \\in J}x_j-\\sum_{j \\in (N\\setminus J) \\cup \\{n+1\\}}x_j\\right) \\geq 0,\\quad \\forall \\; J \\subseteq N \\mbox{ with }n-|J| \\mbox{ even}, \\nonumber\n\\end{equation}\nand\n\\begin{equation}\n(n-1)-\\left(\\sum_{j \\in J\\cup \\{n+1\\}}x_j-\\sum_{j \\in N\\setminus J}x_j\\right) \\geq 0,\\quad \\forall \\; J \\subseteq N \\mbox{ with }n-|J| \\mbox{ odd}. \\nonumber\n\\end{equation}\nConsequently, $-1 \\leq x_j \\leq 1$ for $j \\in N,$ \\eqref{trivial}, and \\eqref{dominate3} can be expressed as \\eqref{con258} and \\eqref{con269} below, where \\eqref{con258} encompasses $-1 \\leq x_j \\leq 1$ for $j \\in N$ and \\eqref{trivial}, and \\eqref{con269} encompasses the above two families of inequalities.\n\\begin{eqnarray}\n-1 \\leq x_j \\leq 1 \\; &\\forall \\; j \\in N^{\\prime} \\label{con258}\\\\\n(n-1)-\\left(\\sum_{j \\in J}x_j-\\sum_{j \\in N^{\\prime}\\setminus J}x_j\\right) \\geq 0, &\\quad \\forall \\;J \\subseteq N^{\\prime} \\mbox{ with } n+1-|J| \\mbox{ odd} \\label{con269}\n\\end{eqnarray}\nOf course, if $n=2,$ then \\eqref{con258} simplifies to $-1 \\leq x_{n+1} \\leq 1.$\n\nNow we use \\mythref{exact1111} and \\mythref{exactly1111} to identify, for each of the $(n+3)$ core facets \\eqref{trivial}, \\eqref{pair1}, \\eqref{pair2}, and \\eqref{dominate2} of \\mythref{resulting3}, the set of all points $(\\x,y) \\in G$ that satisfies it exactly. 
\n\n\\begin{theorem}\\thlabel{finalet}\nFor $m(\\x)=\\prod_{j=1}^nx_j$ and $-\\ell=u=1,$ $(\\tilde{\\x},\\tilde{y}) \\in G$ satisfies the core facet\n\\begin{enumerate}\n\t\\item \\eqref{trivial} exactly if and only if $\\tilde{\\x}$ contains\n\t\\begin{enumerate}\n\t\\item even number of entries of value $-1$ and the remaining entries of value 1 when $\\beta^{\\prime}=-1,$ \n\t\\item odd number of entries of value $-1$ and the remaining entries of value 1 when $\\beta^{\\prime}=1,$ \n\t\\end{enumerate}\n\t\\item \\eqref{pair1} exactly if and only if $\\tilde{\\x}$ contains at least $(n-1)$ entries of value $-1,$ \n\t\\item \\eqref{pair2} exactly if and only if $\\tilde{\\x}$ contains at least $(n-1)$ entries of value $1,$\n\t\\item \\eqref{dominate2} for $t \\in \\{0, \\ldots, n-2\\}$ if and only if $\\tilde{\\x}$ differs from $\\x^{t+1}$ in at most one entry.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nEach statement is considered separately. \n\\begin{enumerate}\n\\item Follows directly from the definition of $m(\\x)$ when $-\\ell =u=1.$ \n\\item \\mythref{foundation900} and its proof give us that \\eqref{pair1} is satisfied exactly at $(\\tilde{\\x},\\tilde{y})=(\\x^{k},m(\\x^{k}))$ for $k \\in \\{0,1\\},$ but at no other $k.$ Then \\mythref{exactly1111} with $\\bar{\\beta}=1$ and $\\tilde{\\x}$ having $p=0$ and $b=(n-1)$ gives the result.\n\\item \\mythref{foundation900} and its proof give us that \\eqref{pair2} is satisfied exactly at $(\\tilde{\\x},\\tilde{y})=(\\x^{k},m(\\x^{k}))$ for $k \\in \\{n-1,n\\},$ but at no other $k.$ Then \\mythref{exactly1111} with $\\bar{\\beta}=-1$ and $\\tilde{\\x}$ having $p=(n-1)$ and $b=0$ gives the result.\n\\item For each $t \\in \\{0, \\ldots, n-2\\},$ \\mythref{resulting} and its proof give us that the associated inequality of \\eqref{dominate2} is satisfied exactly at $(\\tilde{\\x},\\tilde{y})=(\\x^{k},m(\\x^{k}))$ for $k \\in \\{t,t+1,t+2\\},$ but at no other $k.$ Then \\mythref{exact1111} with $\\tilde{\\x}$ having $p=t$ and $b=(n-t-1)$ allows $\\tilde{\\x},$ with suitable $r$ and $s,$ to differ from $x^{t+1}$ in only a single entry $j$ less than or equal to $(t+1).$ Similarly, \\mythref{exact1111} with $\\tilde{\\x}$ having $p=(t+1)$ and $b=(n-t-1)$ allows $\\tilde{\\x},$ with suitable $r$ and $s,$ to differ from $x^{t+1}$ in only a single entry $j$ greater than or equal to $(t+1).$ Combining these outcomes gives the result.\\qedhere\n\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Proof of the Convex Hull}\n\nWe prove \\mythref{resulting3} using three main lemmas. Let us begin with two simple observations. First, there exist no facets \\eqref{sum24} with $\\beta^{\\prime}=0$ if $n=2,$ and that there exist $2n$ such facets of the form $-1 \\leq x_j \\leq 1$ for $j \\in N$ if $n \\geq 3.$ Second, the two inequalities of \\eqref{trivial} are core facets. They are trivially valid, and they are core facets by \\mythref{equalcoeff} because each has $\\beta_j=0$ for all $j \\in N,$ and each is satisfied exactly at two extreme points $\\x^k$ of $\\mathcal{S},$ with one point not being $\\x^0$ or $\\x^n;$ equality occurs at $\\x^{n-2}$ and $\\x^n$ for the left inequality and occurs at $\\x^{n-3}$ and $\\x^{n-1}$ for the right inequality. 
\n\nNow, to facilitate the derivation of the remaining core facets and thereby characterize $\\conv{G},$ we define, for each $(p,q),p 0.$ Combining, $0 < 2\\bar{\\beta}+\\beta^{\\prime}\\left(m(\\x^{t+2})-m(\\x^{t+1})\\right)< 0,$ which is not possible.\n\\item\n$t=0.$ We have $D(0,1)=2\\bar{\\beta}+\\beta^{\\prime}\\left(2(-1)^{n+1}\\right)=0,$ with the first equality holding since $\\mk[0]=-m(\\x^{1})=(-1)^{n}.$ Then $\\bar{\\beta}=\\beta^{\\prime}(-1)^n$ and \\eqref{sum24} evaluated at $(\\x,y)=(\\x^0,\\mk[0])$ gives $\\beta_0=\\beta^{\\prime}\\left((-1)^n(n-1)\\right)$ because $\\mk[0]=(-1)^{n}.$ Inequality \\eqref{sum24} becomes \n\\begin{equation}\n\\beta^{\\prime}\\left((-1)^n(n-1)+(-1)^n\\sum_{j=1}^nx_j+y\\right) \\geq 0. \\nonumber\n\\end{equation} \nThis inequality is valid when $\\beta^{\\prime} = (-1)^n$ by the ``if\" direction of \\mythref{foundation} since it holds true at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{0, \\ldots, n\\},$ but it is invalid when $\\beta^{\\prime}=(-1)^{n+1}$ since it is then violated at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{2, \\ldots, n\\}.$ The inequality with $\\beta^{\\prime} = (-1)^n$ is \\eqref{pair1}, and is a facet by the ``if\" direction of \\mythref{equalcoeff} since it holds exactly at $(\\x,y)=(\\x^k, \\mk)$ for $k \\in \\{0,1\\}.$\n\\item\n$t=(n-1).$ We have $D(n-1,n)=2\\bar{\\beta}+2\\beta^{\\prime}=0,$ with the first equality holding since $-m(\\x^{n-1})=\\mk[n]=1.$ Then $\\bar{\\beta}=-\\beta^{\\prime}$ and \\eqref{sum24} evaluated at $(\\x,y)=(\\x^{n-1},m(\\x^{n-1}))$ gives $\\beta_0=\\beta^{\\prime}(n-1)$ because $m(\\x^{n-1})=-1.$ Inequality \\eqref{sum24} becomes \n\\begin{equation}\n\\beta^{\\prime}\\left((n-1)-\\sum_{j=1}^nx_j+y\\right) \\geq 0. \\nonumber\n\\end{equation} \nThis inequality is valid when $\\beta^{\\prime}=1$ by the ``if\" direction of \\mythref{foundation} since it holds true at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{0, \\ldots, n\\},$ but it is invalid when $\\beta^{\\prime}=-1$ since it is then violated at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{0, \\ldots, n-2\\}.$ The inequality with $\\beta^{\\prime} = 1$ is \\eqref{pair2}, and is a facet by the ``if\" direction of \\mythref{equalcoeff} since it holds exactly at $(\\x,y)=(\\x^{k}, \\mk)$ for $k \\in \\{n-1,n\\}.$ \\qedhere\n\\end{itemize}\n\\end{proof} \n\nThe below result builds further by providing, in terms of the extreme points $\\x^k$ of $\\mathcal{S},$ necessary conditions for valid core inequalities that are not found in \\eqref{trivial}, \\eqref{pair1}, or \\eqref{pair2} to be facets. \n\n\\begin{lemma}\t\\thlabel{foundation15}\nEvery core facet for $\\conv{G}$ which is not found in \\eqref{trivial}, \\eqref{pair1}, or \\eqref{pair2} is satisfied exactly at $(\\x^k,\\mk)$ for three consecutive extreme points $\\x^k$ of the simplex $\\mathcal{S},$ and for no other such extreme points.\n\\end{lemma}\n\\begin{proof}\n\\mythref{foundation22} gives us that a core facet is satisfied exactly at $(\\x^k,\\mk)$ for at least two extreme points of the simplex $\\mathcal{S}.$ Let $r$ and $s$ be \nas in \\mythref{equalcoeff8}. 
Let $d=(s-r) \\geq 1$ denote the difference between $s$ and $r.$ \\mythref{foundation900} exhausts the cases having $d=1,$ and so it is sufficient to show three results: a core facet having $d=2$ that is not of the form \\eqref{trivial} must be satisfied exactly at $(\\x,y)=(\\x^{r+1},\\mk[r+1]),$ there exists no valid inequality having $d \\geq 3$ and $d$ odd, and a core facet having $d \\geq 4$ and $d$ even must be of the form \\eqref{trivial}. \n\\begin{itemize}\n\\item Suppose that a core facet $\\beta_0+\\sum_{j=1}^n\\beta_jx_j +\\beta^{\\prime}y \\geq 0$ has $d=2$ and is not of the form \\eqref{trivial}. We have $\\beta_1 = \\ldots = \\beta_{r+1}$ and $\\beta_{r+2}=\\ldots=\\beta_n$ by \\mythref{equalcoeff8}, so that we cannot have $\\beta_{r+1}=\\beta_{r+2}=0$ since the facet would be of the form \\eqref{trivial}. As a result, $D(r,r+2)=0$ with $\\mk[r]=m(\\x^{r+2})$ gives $0<-\\beta_{r+1} = \\beta_{r+2}=\\bar{\\beta}$ for some $\\bar{\\beta}>0.$ Then $D(r+1,r+2) \\leq 0$ gives $\\bar{\\beta} \\leq \\beta^{\\prime}\\mk[r+1]$ because $m(\\x^{r+2}) = -\\mk[r+1]$ and, since $0 < \\bar{\\beta}$ and $\\mk[r+1]=(-1)^{n-(r+1)},$ we have $0 < \\bar{\\beta}\\leq \\beta^{\\prime}\\mk[r+1]=\\beta^{\\prime}(-1)^{n-(r+1)}$ so that $\\beta^{\\prime}=(-1)^{n-(r+1)}.$ Consequently, the core facet takes the form \n\\begin{equation} \n\\beta_0-\\bar{\\beta}\\left(\\sum_{j=1}^{r+1}x_j-\\sum_{j=r+2}^{n}x_j\\right)+(-1)^{n-(r+1)}y \\geq 0. \\label{trivia0}\n\\end{equation} \nWe now use \\mythref{equalcoeff7} to show that the core facet must be satisfied exactly at $(\\x,y)=(\\x^{r+1},\\mk[r+1])$ by setting $\\beta_0 = (n-1)$ and $\\bar{\\beta}=1$ within \\eqref{trivia0}. The resulting inequality is valid for $\\conv{G}$ and is satisfied exactly at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{r,r+1,r+2\\}.$ The validity follows from \\mythref{foundation} since, for any $r \\in \\{0, \\ldots, n-2\\},$ the left side is $2|k-(r+1)|-1+(-1)^{k-(r+1)}$ at $(\\x,y)=\\left(\\x^k,\\mk\\right)$ for each $k \\in \\{0, \\ldots, n\\}$ because $\\mk=(-1)^{n-k}.$ \n\n\\item By contradiction, suppose there exists a valid inequality $\\beta_0+\\sum_{j=1}^n\\beta_jx_j +\\beta^{\\prime}y \\geq 0$ having $d \\geq 3$ and $d$ odd. Then $D(r,s)=0$ gives $\\sum_{j=r+1}^s\\beta_j=\\beta^{\\prime}\\mk[r]$ since $\\mk[r]=-\\mk[s].$ If $\\beta^{\\prime}\\mk[r]=1$ so that $\\beta^{\\prime}\\left(\\mk[r+1]-\\mk[r]\\right)=-2,$ then $D(r,r+1) \\geq 0$ gives $\\beta_{r+1} \\geq 1,$ and nondecreasing $\\beta_j$ gives $\\beta_{r+1} \\leq \\frac{1}{d}.$ Thus, a contradiction. Similarly, if $\\beta^{\\prime}\\mk[r]=-1$ so that $\\beta^{\\prime}\\left(\\mk[s]-m(\\x^{s-1})\\right)=2,$ then $D(s-1,s) \\leq 0$ gives $\\beta_{s} \\leq -1,$ and nondecreasing $\\beta_j$ gives $\\beta_{s} \\geq \\frac{-1}{d}.$ Again a contradiction.\n\\item Suppose that a core facet $\\beta_0+\\sum_{j=1}^n\\beta_jx_j +\\beta^{\\prime}y \\geq 0$ has $d \\geq 4$ and $d$ even. The value $D(r,s)=0$ gives $\\sum_{j=r+1}^s \\beta_j=0$ since $\\mk[r]=\\mk[s].$ But $D(r,r+2) \\geq 0$ gives $\\left(\\beta_{r+1}+\\beta_{r+2}\\right) \\geq 0$ since $\\mk[r]=m(\\x^{r+2}),$ and the nondecreasing $\\beta_j$ gives $\\beta_j=0$ for $j \\in \\{r+1, \\ldots, s\\}.$ Then $\\beta_j=0$ for $j \\in N$ by \\mythref{equalcoeff8}, and the facet is of the form \\eqref{trivial}. 
\\qedhere\n\\end{itemize}\n\\end{proof}\n\nA key component of the proof of \\mythref{foundation15} is the family of $(n-1)$ valid core inequalities \\eqref{trivia0} with $\\beta_0 = (n-1)$ and $\\bar{\\beta}=1$ that has the property that, for each $r \\in \\{0, \\ldots, n-2\\},$ the corresponding inequality is satisfied exactly at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{r,r+1,r+2\\}.$ It turns out that each such inequality is a core facet, and that there exist no other core facets that are satisfied exactly at $(\\x^k,\\mk)$ for three consecutive extreme points $\\x^k$ of the simplex $\\mathcal{S}.$ This result is established below. Here, we find it convenient to substitute the index $t$ in \\eqref{dominate2} for $r$ in \\eqref{trivia0}.\n\n\\begin{lemma}\t\\thlabel{resulting}\nThere exist precisely $(n-1)$ core facets that are satisfied exactly at $(\\x^k,\\mk)$ for three consecutive extreme points $\\x^k$ of the simplex $\\mathcal{S},$ and these inequalities are~\\eqref{dominate2}.\n\\end{lemma}\n\\begin{proof}\nThe proof consists of two parts: the first part shows that inequalities \\eqref{dominate2} are the only candidate core facets that are satisfied exactly at three consecutive extreme points of the simplex $\\mathcal{S},$ and the second part shows that each such inequality is a core facet.\n\nSuppose, for some $t \\in \\{0,\\ldots, n-2\\},$ that $\\beta_0+\\sum_{j=1}^n\\beta_jx_j +\\beta^{\\prime}y \\geq 0$ is a core facet that is satisfied exactly at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{t,t+1,t+2\\}.$ Then $d=2$ in the proof of \\mythref{foundation15} since $d \\ge 3,$ $d$ odd, was shown not possible, and $d \\geq 4,$ $d$ even, was shown to yield a core facet of the form \\eqref{trivial}. Clearly, the facet is not of the form \\eqref{trivial} since $D(t,t+1) = 0$ gives $\\beta_{t+1}=\\beta^{\\prime}m(\\x^t) \\neq 0.$ As a result, the first part of the proof of \\mythref{foundation15} gives us that the facet must be of the form \\eqref{trivia0}. The two equations in two unknowns $\\beta_0$ and $\\bar{\\beta}$ that are obtained by evaluating $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{t,t+1\\}$ within \\eqref{trivia0} give $\\beta_0=(n-1)$ and $\\bar{\\beta}=1$ as in \\eqref{dominate2}, since $-m(\\x^t)=m(\\x^{t+1})=(-1)^{n-(t+1)}.$\n\nThe proof of \\mythref{foundation15} showed inequalities \\eqref{dominate2} to be valid for $\\conv{G}$ and, for each $t \\in \\{0,\\ldots, n-2\\},$ the associated inequality to be satisfied exactly at $(\\x,y)=(\\x^k,\\mk)$ for $k \\in \\{t, t+1, t+2\\}.$ Thus, given such a $t,$ it is sufficient to identify $(n+1)$ affinely independent points in $\\conv{G}$ that satisfy the inequality exactly. Consider the $n$ extreme points of $\\conv{G},$ denoted by $(\\x[p]^1,m(\\x[p]^1)), \\ldots, (\\x[p]^n,m(\\x[p]^n)),$ so that $\\x[p]^i$ differs from $\\x^{t+1}$ in only position $i.$ Since $\\beta_1= \\ldots = \\beta_{t+1}$ and $(\\x,y)=(\\x^t,m(\\x^t))$ satisfies the inequality exactly, \\mythref{foundation45} gives us that $(\\x,y)=(\\x[p]^i,m(\\x[p]^i))$ for $i \\in \\{1,\\ldots, t+1\\}$ satisfies the inequality exactly. Similarly, since $\\beta_{t+2}= \\ldots = \\beta_n$ and $(\\x,y)=(\\x^{t+1},m(\\x^{t+1}))$ satisfies the inequality exactly, \\mythref{foundation45} gives us that $(\\x,y)=(\\x[p]^i,m(\\x[p]^i))$ for $i \\in \\{t+2,\\ldots, n\\}$ satisfies the inequality exactly. 
Subtract $\\x^{t+1}$ from every such point to reduce $\\x[p]^i$ to $-2\\boldsymbol{e}^i$ for $i \\leq t+1$ and to $2\\boldsymbol{e}^i$ for $i \\geq t+2,$ where $\\boldsymbol{e}^i$ is the unit vector in $\\real^n$ having a 1 in position $i$ and $0$ elsewhere. Hence, $\\x^{t+1},$ together with $\\x[p]^1, \\ldots, \\x[p]^n,$ is an affinely independent set of points, so that $(\\x^{t+1},m(\\x^{t+1})),$ together with $(\\x[p]^1,m(\\x[p]^1)), \\ldots, (\\x[p]^n,m(\\x[p]^n)),$ is an affinely independent set of points.\n\\end{proof}\n\n\\begin{proof}[\\textbf{Proof of \\mythref{resulting3}}]\nFollows from \\mythref{foundation900,foundation15,resulting}.\n\\end{proof}\n\n\n\\section{Summary and Open Questions}\n\nThis paper derives polyhedral results for the convex hull of symmetric multilinear polynomials (SMPs) taken over a box domain. \\akshay{Exponential-sized extended formulations of general multilinear polynomials are available via the reformulation-linearization-technique (RLT), but symmetry and disjunctive programming enable a quadratic-sized extended formulation.} The goal of this paper is to obtain the convex hulls in the original variable spaces. Instead of adopting the tedious method of projecting the extended formulations, our approach is more elegant in the sense that we directly exploit the problem structure to define special core facets by which all facets can be characterized, and to then devise necessary and\/or sufficient conditions on the coefficients of these facets. Whereas much of the theory is applicable to general SMPs over box constraints, we focus attention on two special problem classes: general supermodular (submodular) functions, and monomials having the variable lower bound equal to the negative of the upper bound. For each class, we use the necessary conditions to motivate families of core facets, and then prove that no other such facets can exist. \\akshay{Our derivations of these convex hulls provides alternate proofs to those in literature.} For both classes, we use RLT results to characterize for each facet the set of all points at which the inequality is satisfied exactly.\n\nA direction of future research is the identification of convex hull forms for more general families of SMPs than the two types within this paper, and likewise for the identification of all points within the resulting graphs that satisfy each facet exactly. \\akshay{One open question in this regard is to generalize \\mythref{resulting3} to monomials taken over $[-1,r]^{n}$ for arbitrary $r > 0$. Even more broadly, we do not know an explicit minimal description for convex hulls of general SMPs in the original variable space. Projecting the extended formulation of \\mythref{extform} is an option, but this is likely to result in a combinatorial explosion as is usually the case when projecting from a higher-dimensional space. Another question that is open is whether similar to the two families of SMPs analysed in this paper, every SMP has a linear (or even polynomial) number of core facets. 
A positive answer to this question would, by \mythref{sepcompl}, yield a straightforward separation algorithm for the convex hull without invoking the ellipsoid method and the optimization algorithm of \mythref{optcompl}.

\paragraph{Acknowledgement.}
This research was initiated while the authors were with the School of Mathematical and Statistical Sciences at Clemson University, USA, during which time the first two authors (YX and WA) were supported by ONR grant N00014-16-1-2168 and the third author (AG) was supported by ONR grant N00014-16-1-2725.

{
\newrefcontext[sorting=nyt]
\printbibliography[heading=bibliography]
}

\newpage
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}

\begin{appendices}

\section{Properties of $\conv{G}$ from RLT}	\label{sec:propG}

The key observation is that the equivalence $P=\conv{T}$ of \eqref{miss1}, between the polyhedral set $P$ of \eqref{RLTstep2} and the set $T$ of \eqref{miss1}, continues to hold with the inclusion of the restriction $y= m(\x)$ within these two sets. Consider a generalization of the set $G$ given by 
\begin{equation}
G^{\prime} \equiv\left\{(\x,y)\in \real^n \times \real: \x \in X^{\prime}, \; y= m(\x)\right\}, \label{marker}
\end{equation}
that is obtained by replacing $\x \in X$ of \eqref{Xdef} with $\x \in X^{\prime}$ of \eqref{Xprimedef}. Let
\begin{equation}
P_y\equiv\left\{(\x,\x[w],y) \in \real^n\times\real^{2^n-(n+1)}\times \real:(\x,\x[w]) \in P, \; y=\left\{m(\x)\right\}_L\right\} \label{handy}
\end{equation}
be the set $P$ of \eqref{RLTstep2} modified to include the additional variable $y$ and the additional restriction $y=\left\{m(\x)\right\}_L,$ and let 
\begin{equation}
T_y\equiv\left\{(\x,\x[w],y) \in \real^n\times\real^{2^n-(n+1)}\times \real:(\x,\x[w]) \in T, \; y=m(\x)\right\} \nonumber
\end{equation}
be the set $T$ of \eqref{miss1} modified to include the additional variable $y$ and the additional restriction $y=m(\x).$ Then we have that
\begin{equation}
P_y=\conv{T_y}. \label{pete}
\end{equation}
Equality \eqref{pete} follows from \eqref{miss1} because the extreme points of $P$ and $P_y$ are in one-to-one correspondence, in such a manner that $(\x,\x[w])$ is an extreme point of $P$ if and only if $(\x,\x[w],y)$ is an extreme point of $P_y$ having $y=\left\{m(\x)\right\}_L=m(\x).$ Therefore, every extreme point of the polytope $P_y$ is in $T_y,$ and $T_y \subseteq P_y$ by construction.
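As a concrete illustration of \eqref{pete}, consider the smallest nontrivial case $n=2$ with $m(\x)=x_1x_2,$ for which there is a single linearization variable $w_{12}=\left\{x_1x_2\right\}_L.$ Assuming the standard RLT bound-factor convention (our indexing here is for illustration and may differ from that of \eqref{RLTstep2}), the four linearized bound-factor constraints defining $P,$ together with $y=w_{12},$ read
\begin{align*}
\left\{(x_1-L_1)(x_2-L_2)\right\}_L &= w_{12}-L_2x_1-L_1x_2+L_1L_2 \geq 0,\\
\left\{(U_1-x_1)(x_2-L_2)\right\}_L &= -w_{12}+L_2x_1+U_1x_2-U_1L_2 \geq 0,\\
\left\{(x_1-L_1)(U_2-x_2)\right\}_L &= -w_{12}+U_2x_1+L_1x_2-L_1U_2 \geq 0,\\
\left\{(U_1-x_1)(U_2-x_2)\right\}_L &= w_{12}-U_2x_1-U_1x_2+U_1U_2 \geq 0.
\end{align*}
Thus $P_y$ is the familiar McCormick system for $y=x_1x_2,$ whose four extreme points are the corners of the box $[L_1,U_1]\times[L_2,U_2]$ lifted with $y=w_{12}=x_1x_2,$ in agreement with \eqref{pete}.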
The three results below are consequences of \eqref{pete}, where $\mbox{Proj}_{(\x,y)}(\bullet)$ denotes the projection of the set $\bullet$ onto the space of the variables $(\x,y).$ 

\begin{proposition}	\thlabel{equalities}
$\conv{G^{\prime}} = \conv{\left(\mbox{Proj}_{(\x,y)}(T_y)\right)} = \mbox{Proj}_{(\x,y)}\left(\conv{T_y}\right) = \mbox{Proj}_{(\x,y)}\left(P_y\right)$. 
\end{proposition}
\begin{proof}
The first equality follows because $G^{\prime}=\mbox{Proj}_{(\x,y)}(T_y),$ the second follows from interchanging the projection and convex hull operators, and the third follows from \eqref{pete}.
\end{proof} 

\begin{proposition}	\thlabel{equalities21}
$\conv{G^{\prime}}$ is a polytope with $2^n$ extreme points that are in one-to-one correspondence with the extreme points of $X^{\prime},$ in such a manner that $y=m(\x)$ at each such point $\x.$ 
\end{proposition}
\begin{proof}
$\conv{G^{\prime}}$ is a polytope with no more than $2^n$ extreme points, since it is the projection onto the $(\x,y)$ space of the polytope $P_y$ having $2^n$ extreme points, as stated in \mythref{equalities}. Conversely, each of the $2^n$ points of $\conv{G^{\prime}}$ having $x_j=L_j$ or $x_j=U_j$ for all $j \in N,$ and $y=m(\x),$ is trivially an extreme point of $\conv{G^{\prime}}.$
\end{proof} 

\mythref{equalities21} also follows from results in \citet{rikun1997convex}. Instead of using the projection from a higher-dimensional RLT space as in the above proof, \citeauthor{rikun1997convex} shows that every $(\x,y) \in G^{\prime}$ having some $x_j \in (L_j,U_j)$ can be expressed as a strict convex combination of two distinct points in $G^{\prime}.$

The following result addresses the validity of a linear inequality for $\conv{G^{\prime}}$ in terms of the restrictions of the set $P_y.$

\begin{corollary}	\thlabel{equalities24}
A linear inequality $\beta_0+\sum_{j=1}^n\beta_jx_j+\beta^{\prime}y \geq 0$ is valid for $\conv{G^{\prime}}$ if and only if it can be uniquely expressed as a linear combination of the restrictions of $P_y$ using nonnegative multipliers $\x[\pi] \in \real^{2^n}$ and the scalar $\beta^{\prime},$ so that
\begin{equation}
\beta_0+\sum_{j=1}^n\beta_jx_j+\beta^{\prime}y=\sum_{K \subseteq N}\pi_{K}F_{K}(x)+\beta^{\prime}\left(y-\left\{m(\x)\right\}_L\right). \label{ruby}
\end{equation}
\end{corollary}
\begin{proof}
The existence of nonnegative multipliers $\x[\pi]$ satisfying \eqref{ruby} follows from the result of \mythref{equalities} that $\conv{G^{\prime}}=\mbox{Proj}_{(\x,y)}\left(P_y\right),$ as then a linear inequality is valid for $\conv{G^{\prime}}$ if and only if it is valid for $P_y.$ The uniqueness follows from the invertibility of the matrix $\scriptsize\left[\begin{array}{cc} U_1 & -1 \\ -L_1 & 1 \end{array}\right] \normalsize \otimes \ldots \otimes \scriptsize \left[\begin{array}{cc} U_n & -1 \\ -L_n & 1 \end{array}\right]$ of \eqref{ca4}.
\end{proof}
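The uniqueness in \mythref{equalities24} also suggests a simple computational recipe for recovering the multipliers: each bound-factor product $F_{K}$ vanishes at every extreme point of $X^{\prime}$ except one, and $\left\{m(\x)\right\}_L=m(\x)$ at these points, so evaluating \eqref{ruby} at the $2^n$ extreme points produces a diagonal linear system for $\x[\pi].$ The script below is a minimal sketch of this recipe, again under illustrative assumptions only: $m(\x)=x_1\cdots x_n$ over $[-1,1]^n,$ the bound-factor convention $F_{K}(x)=\prod_{j\in K}(U_j-x_j)\prod_{j\in N\setminus K}(x_j-L_j),$ and the sample inequality $1+y\geq 0,$ which is valid since $|m(\x)|\leq 1$ over this box.

\begin{verbatim}
# Sketch: recover the unique multipliers pi_K of the combination by
# evaluating it at the 2^n box vertices, where {m(x)}_L = m(x) and each
# F_K is nonzero only at its matching vertex.  Assumed conventions:
# m(x) = x_1 * ... * x_n and
# F_K(x) = prod_{j in K} (U_j - x_j) * prod_{j not in K} (x_j - L_j).
import itertools
import numpy as np

n = 3
L, U = -np.ones(n), np.ones(n)       # bounds L_j = -1 and U_j = 1

# Sample valid inequality  beta_0 + beta . x + beta_prime * y >= 0.
beta0, beta, beta_prime = 1.0, np.zeros(n), 1.0

vol = np.prod(U - L)                 # value of F_K at its own vertex
for r in range(n + 1):
    for K in itertools.combinations(range(n), r):
        inK = np.isin(np.arange(n), K)
        vK = np.where(inK, L, U)     # the one vertex where F_K is nonzero
        lhs = beta0 + beta @ vK + beta_prime * np.prod(vK)
        pi_K = lhs / vol             # the multiplier of F_K in the combination
        print("pi for K =", K, "is", pi_K)
\end{verbatim}

Nonnegativity of every computed $\pi_K$ certifies validity of the inequality for $\conv{G^{\prime}},$ in line with \mythref{equalities24}; conversely, a negative $\pi_K$ would exhibit an extreme point at which the inequality fails.

\input{Examples}
\end{appendices}

\end{document}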