Dataset schema:
query_id: string (length 1–6)
query: string (length 2–185)
positive_passages: list (length 1–121)
negative_passages: list (length 15–100)
1961513
A Framework for Clustering Uncertain Data
[ { "docid": "pos:1961513_0", "text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.", "title": "" }, { "docid": "pos:1961513_1", "text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.", "title": "" } ]
[ { "docid": "neg:1961513_0", "text": "Clustering image pixels is an important image segmentation technique. While a large number of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.", "title": "" }, { "docid": "neg:1961513_1", "text": "W-algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W-algebras. 
Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.", "title": "" }, { "docid": "neg:1961513_2", "text": "Owing to inevitable thermal/moisture instability for organic–inorganic hybrid perovskites, pure inorganic perovskite cesium lead halides with both inherent stability and prominent photovoltaic performance have become research hotspots as a promising candidate for commercial perovskite solar cells. However, it is still a serious challenge to synthesize desired cubic cesium lead iodides (CsPbI3) with superior photovoltaic performance for its thermodynamically metastable characteristics. Herein, polymer poly-vinylpyrrolidone (PVP)-induced surface passivation engineering is reported to synthesize extra-long-term stable cubic CsPbI3. It is revealed that acylamino groups of PVP induce electron cloud density enhancement on the surface of CsPbI3, thus lowering surface energy, conducive to stabilize cubic CsPbI3 even in micrometer scale. The cubic-CsPbI3 PSCs exhibit extra-long carrier diffusion length (over 1.5 μm), highest power conversion efficiency of 10.74% and excellent thermal/moisture stability. This result provides important progress towards understanding of phase stability in realization of large-scale preparations of efficient and stable inorganic PSCs. Inorganic cesium lead iodide perovskite is inherently more stable than the hybrid perovskites but it undergoes phase transition that degrades the solar cell performance. Here Li et al. stabilize it with poly-vinylpyrrolidone and obtain high efficiency of 10.74% with excellent thermal and moisture stability.", "title": "" }, { "docid": "neg:1961513_3", "text": "Glomus tumors of the penis are extremely rare. A patient with multiple regional glomus tumors involving the penis is reported. A 16-year-old boy presented with the complaint of painless penile masses and resection of the lesions was performed. The pathologic diagnosis was glomus tumor of the penis. 
This is the ninth case of glomus tumor of the penis to be reported in the literature.", "title": "" }, { "docid": "neg:1961513_4", "text": "The global gold market has recently attracted a lot of attention and the price of gold is relatively higher than its historical trend. For mining companies, mitigating risk and uncertainty in gold price fluctuations and making hedging, future investment, and evaluation decisions all depend on forecasting future price trends. The first section of this paper reviews the world gold market and the historical trend of gold prices from January 1968 to December 2008. This is followed by an investigation into the relationship between gold price and other key influencing variables, such as oil price and global inflation over the last 40 years. The second section applies a modified econometric version of the long-term trend reverting jump and dip diffusion model for forecasting natural-resource commodity prices. This method addresses the deficiencies of previous models, such as jumps and dips as parameters and unit root test for long-term trends. The model proposes that historical data of mineral commodities have three terms to demonstrate fluctuation of prices: a long-term trend reversion component, a diffusion component and a jump or dip component. The model calculates each term individually to estimate future prices of mineral commodities. The study validates the model and estimates the gold price for the next 10 years, based on monthly historical data of nominal gold price. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1961513_5", "text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). 
This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated.", "title": "" }, { "docid": "neg:1961513_6", "text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damage and to eliminate risks of safety hazards. This paper focuses on line–line faults in PV arrays that may be caused by short-circuit faults or double ground faults. The effect on fault current from a maximum-power-point tracking of a PV inverter is discussed and shown to, at times, prevent overcurrent protection devices (OCPDs) from operating properly. Furthermore, fault behavior of PV arrays is highly related to the fault location, fault impedance, irradiance level, and use of blocking diodes. Particularly, this paper examines the challenges to OCPD in a PV array brought by unique faults: One is a fault that occurs under low-irradiance conditions, and the other is a fault that occurs at night and evolves during "night-to-day" transition. 
In both circumstances, the faults might remain hidden in the PV system, no matter how irradiance changes afterward. These unique faults may subsequently lead to unexpected safety hazards, reduced system efficiency, and reduced reliability. A small-scale experimental PV system has been developed to further validate the conclusions.", "title": "" }, { "docid": "neg:1961513_7", "text": "In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile riskconstrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.", "title": "" }, { "docid": "neg:1961513_8", "text": "We propose a mechanism to reconstruct part annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance in both the tasks, when compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. 
Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show the effectiveness of the proposed loss in generating more faithful part reconstructions while also improving segmentation accuracy. We thoroughly evaluate the proposed approach on different object categories from the ShapeNet dataset to obtain improved results in reconstruction as well as segmentation. Codes are available at https://github.com/val-iisc/3d-psrnet.", "title": "" }, { "docid": "neg:1961513_9", "text": "An ego-motion estimation method based on the spatial and Doppler information obtained by an automotive radar is proposed. The estimation of the motion state vector is performed in a density-based framework. Compared to standard vehicle odometry the approach is capable to estimate the full two dimensional motion state with three degrees of freedom. The measurement of a Doppler radar sensor is represented as a mixture of Gaussians. This mixture is matched with the mixture of a previous measurement by applying the appropriate egomotion transformation. The parameters of the transformation are found by the optimization of a suitable join metric. Due to the Doppler information the method is very robust against disturbances by moving objects and clutter. It provides excellent results for highly nonlinear movements. Real world results of the proposed method are presented. The measurements are obtained by a 77GHz radar sensor mounted on a test vehicle. A comparison using a high-precision inertial measurement unit with differential GPS support is made. The results show a high accuracy in velocity and yaw-rate estimation.", "title": "" }, { "docid": "neg:1961513_10", "text": "Numerous studies have established that aggregating judgments or predictions across individuals can be surprisingly accurate in a variety of domains, including prediction markets, political polls, game shows, and forecasting (see Surowiecki, 2004). 
Under Galton’s (1907) conditions of individuals having largely unbiased and independent judgments, the aggregated judgment of a group of individuals is uncontroversially better, on average, than the individual judgments themselves (e.g., Armstrong, 2001; Clemen, 1989; Galton, 1907; Surowiecki, 2004; Winkler, 1971). The boundary conditions of crowd wisdom, however, are not as well-understood. For example, when group members are allowed access to other members’ predictions, as opposed to making them independently, their predictions become more positively correlated and the crowd’s performance can diminish (Lorenz, Rauhut, Schweitzer, & Helbing, 2011). In the context of handicapping sports results, individuals have been found to make systematically biased predictions, so that their aggregated judgments may not be wise (Simmons, Nelson, Galak, & Frederick, 2011). How robust is crowd wisdom to factors such as non-independence and bias of crowd members’ judgments? If the conditions for crowd wisdom are less than ideal, is it better to aggregate judgments or, for instance, rely on a skilled individual judge? Would it be better to add a highly skilled crowd member or a less skilled one who makes systematically different predictions than other members, increasing diversity? We provide a simple, precise definition of the wisdom-of-the-crowd effect and a systematic way to examine its boundary conditions. We define a crowd as wise if a linear aggregate of its members’ judgments of a criterion value has less expected squared error than the judgments of an individual sampled randomly, but not necessarily uniformly, from the crowd. Previous definitions of the wisdom of the crowd effect have largely focused on comparing the crowd’s accuracy to that of the average individual member (Larrick, Mannes, & Soll, 2012). Our definition generalizes prior approaches in a couple of ways. We consider crowds created by any linear aggregate, not just simple averaging. 
Second, our definition allows the comparison of the crowd to an individual selected according to a distribution that could reflect past individual performance, e.g., their skill, or other attributes. On the basis of our definition, we develop a framework for analyzing crowd wisdom that includes various aggregation and sampling rules. These rules include both weighting the aggregate and sampling the individual according to skill, where skill is operationalized as predictive validity, i.e., the correlation between a judge’s prediction and the criterion. Although the amount of the crowd’s wisdom (the expected difference between individual error and crowd error) is non-linear in the amount of bias and non-independence of the judgments, our results yield simple and general rules specifying when a simple average will be wise. While a simple average of the crowd is not always wise if individuals are not sampled uniformly at random, we show that there always exists some a priori aggregation rule that makes the crowd wise.", "title": "" }, { "docid": "neg:1961513_11", "text": "Driving simulators play an important role in the development of new vehicles and advanced driver assistance devices. In fact, on the one hand, having a human driver on a driving simulator allows automotive OEMs to bridge the gap between virtual prototyping and on-road testing during the vehicle development phase. On the other hand, novel driver assistance systems (such as advanced accident avoidance systems) can be safely tested by having the driver operating the vehicle in a virtual, highly realistic environment, while being exposed to hazardous situations. In both applications, it is crucial to faithfully reproduce in the simulator the driver’s perception of forces acting on the vehicle and its acceleration. The strategy used to operate the simulator platform within its limited working space to provide the driver with the most realistic perception goes under the name of motion cueing. 
In this paper we describe a novel approach to motion cueing design that is based on Model Predictive Control techniques. Two features characterize the algorithm, namely, the use of a detailed model of the human vestibular system and a predictive strategy based on the availability of a virtual driver. Unlike classical schemes based on washout filters, these features allow a better implementation of tilt coordination and more efficient handling of the platform limits.", "title": "" }, { "docid": "neg:1961513_12", "text": "“You make what you measure” is a familiar mantra at data-driven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.", "title": "" }, { "docid": "neg:1961513_13", "text": "We utilise smart eyeglasses for dietary monitoring, in particular to sense food chewing. Our approach is based on a 3D-printed regular eyeglasses design that could accommodate processing electronics and Electromyography (EMG) electrodes. 
Electrode positioning was analysed and an optimal electrode placement at the temples was identified. We further compared gel and dry fabric electrodes. For the subsequent analysis, fabric electrodes were attached to the eyeglasses frame. The eyeglasses were used in a data recording study with eight participants eating different foods. Two chewing cycle detection methods and two food classification algorithms were compared. Detection rates for individual chewing cycles reached a precision and recall of 80%. For five foods, classification accuracy for individual chewing cycles varied between 43% and 71%. Majority voting across intake sequences improved accuracy, ranging between 63% and 84%. We concluded that EMG-based chewing analysis using smart eyeglasses can contribute essential chewing structure information to dietary monitoring systems, while the eyeglasses remain inconspicuous and thus could be continuously used.", "title": "" }, { "docid": "neg:1961513_14", "text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both tree- and DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. 
The proposed method consistently outperforms the state-of-the-art method on both tree- and DAG-structured hierarchies.", "title": "" }, { "docid": "neg:1961513_15", "text": "Last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained. The main emphasis is made on the design of rules for computing sentiments.", "title": "" }, { "docid": "neg:1961513_16", "text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.", "title": "" }, { "docid": "neg:1961513_17", "text": "The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. 
The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients.", "title": "" }, { "docid": "neg:1961513_18", "text": "One of the major restrictions on the performance of video-based person re-id is partial noise caused by occlusion, blur and illumination. Since different spatial regions of a single frame have various quality, and the quality of the same region also varies across frames in a tracklet, a good way to address the problem is to effectively aggregate complementary information from all frames in a sequence, using better regions from other frames to compensate the influence of an image region with poor quality. To achieve this, we propose a novel Region-based Quality Estimation Network (RQEN), in which an ingenious training mechanism enables the effective learning to extract the complementary region-based information between different frames. Compared with other feature extraction methods, we achieved comparable results of 92.4%, 76.1% and 77.83% on the PRID 2011, iLIDS-VID and MARS, respectively. 
In addition, to alleviate the lack of clean large-scale person re-id datasets for the community, this paper also contributes a new high-quality dataset, named “Labeled Pedestrian in the Wild (LPW)” which contains 7,694 tracklets with over 590,000 images. Despite its relatively large scale, the annotations also possess high cleanliness. Moreover, it’s more challenging in the following aspects: the age of characters varies from childhood to elderhood; the postures of people are diverse, including running and cycling in addition to the normal walking state.", "title": "" }, { "docid": "neg:1961513_19", "text": "Building curious machines that can answer as well as ask questions is an important challenge for AI. The two tasks of question answering and question generation are usually tackled separately in the NLP literature. At the same time, both require significant amounts of supervised data which is hard to obtain in many domains. To alleviate these issues, we propose a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning. We evaluate our approach on four benchmark datasets: SQUAD, MS MARCO, WikiQA and TrecQA, and show significant improvements over a number of established baselines on both question answering and question generation tasks. We also achieved new state-of-the-art results on two competitive answer sentence selection tasks: WikiQA and TrecQA.", "title": "" } ]
1961514
How competitive are you: Analysis of people's attractiveness in an online dating system
[ { "docid": "pos:1961514_0", "text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.", "title": "" }, { "docid": "pos:1961514_1", "text": "Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. 
We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites.", "title": "" } ]
[ { "docid": "neg:1961514_0", "text": "The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (Some figures may appear in colour only in the online journal)", "title": "" }, { "docid": "neg:1961514_1", "text": "In this review we examine recent research in the area of motivation in mathematics education and discuss findings from research perspectives in this domain. We note consistencies across research perspectives that suggest a set of generalizable conclusions about the contextual factors, cognitive processes, and benefits of interventions that affect students’ and teachers’ motivational attitudes. Criticisms are leveled concerning the lack of theoretical guidance driving the conduct and interpretation of the majority of studies in the field. Few researchers have attempted to extend current theories of motivation in ways that are consistent with the current research on learning and classroom discourse. 
In particular, researchers interested in studying motivation in the content domain of school mathematics need to examine the relationship that exists between mathematics as a socially constructed field and students’ desire to achieve.", "title": "" }, { "docid": "neg:1961514_2", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "neg:1961514_3", "text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. 
We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy to implement. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.", "title": "" }, { "docid": "neg:1961514_4", "text": "This report explores the relationship between narcissism and unethical conduct in an organization by answering two questions: (1) In what ways does narcissism affect an organization? and (2) What is the relationship between narcissism and the financial industry? Research supports the overall conclusion that narcissistic individuals directly influence the identity of an organization and how it behaves. Ways to address these issues are shown using Enron as a case study example.", "title": "" }, { "docid": "neg:1961514_5", "text": "This paper investigates the effectiveness of state-of-the-art classification algorithms to categorise road vehicles for an urban traffic monitoring system using a multi-shape descriptor. The analysis is applied to monocular video acquired from a static pole-mounted road side CCTV camera on a busy street. Manual vehicle segmentation was used to acquire a large (>2000 sample) database of labelled vehicles from which a set of measurement-based features (MBF) was extracted, in combination with a pyramid of HOG (histogram of orientation gradients, both edge and intensity based) features. These are used to classify the objects into four main vehicle categories: car, van, bus and motorcycle. Results are presented for a number of experiments that were conducted to compare support vector machines (SVM) and random forests (RF) classifiers. 
10-fold cross validation has been used to evaluate the performance of the classification methods. The results demonstrate that all methods achieve a recognition rate above 95% on the dataset, with SVM consistently outperforming RF. A combination of MBF and IPHOG features gave the best performance of 99.78%.", "title": "" }, { "docid": "neg:1961514_6", "text": "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learningby-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, the appearance variability as well as the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses realtime approximations for complex eyeball materials and structures as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.", "title": "" }, { "docid": "neg:1961514_7", "text": "The stiff man syndrome (SMS) and its variants, focal SMS, stiff limb (or leg) syndrome (SLS), jerking SMS, and progressive encephalomyelitis with rigidity and myoclonus (PERM), appear to occur more frequently than hitherto thought. 
A characteristic ensemble of symptoms and signs allows a tentative clinical diagnosis. Supportive ancillary findings include (1) the demonstration of continuous muscle activity in trunk and proximal limb muscles despite attempted relaxation, (2) enhanced exteroceptive reflexes, and (3) antibodies to glutamic acid decarboxylase (GAD) in both serum and spinal fluid. Antibodies to GAD are not diagnostic or specific for SMS and the role of these autoantibodies in the pathogenesis of SMS/SLS/PERM is the subject of debate and difficult to reconcile on the basis of our present knowledge. Nevertheless, evidence is emerging to suggest that SMS/SLS/PERM are manifestations of an immune-mediated chronic encephalomyelitis and immunomodulation is an effective therapeutic approach.", "title": "" }, { "docid": "neg:1961514_8", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of off-patent products. First, we construct a vertical differentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several off-patent molecules before and after the policy reform. Off-patent drugs not subject to RP serve as our control group. We find that RP significantly reduces both brand-name and generic prices, and results in significantly lower brand-name market shares. Finally, we show that RP has a strong negative effect on average molecule prices, suggesting significant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classifications: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. 
We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for financial support.", "title": "" }, { "docid": "neg:1961514_9", "text": "Touchless hand gesture recognition systems are becoming important in automotive user interfaces as they improve safety and comfort. Various computer vision algorithms have employed color and depth cameras for hand gesture recognition, but robust classification of gestures from different subjects performed under widely varying lighting conditions is still challenging. We propose an algorithm for drivers’ hand gesture recognition from challenging depth and intensity data using 3D convolutional neural networks. Our solution combines information from multiple spatial scales for the final prediction. It also employs spatiotemporal data augmentation for more effective training and to reduce potential overfitting. Our method achieves a correct classification rate of 77.5% on the VIVA challenge dataset.", "title": "" }, { "docid": "neg:1961514_10", "text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. 
RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.", "title": "" }, { "docid": "neg:1961514_11", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. 
These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case.", "title": "" }, { "docid": "neg:1961514_12", "text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. 
Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.", "title": "" }, { "docid": "neg:1961514_13", "text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources. In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. 
Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transmission Fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surfaces by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and a set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. 
Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. 
The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. 
Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.", "title": "" }, { "docid": "neg:1961514_14", "text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. 
The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process.", "title": "" }, { "docid": "neg:1961514_15", "text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.", "title": "" }, { "docid": "neg:1961514_16", "text": "We develop several predictive models linking legislative sentiment to legislative text. Our models, which draw on ideas from ideal point estimation and topic models, predict voting patterns based on the contents of bills and infer the political leanings of legislators. With supervised topics, we provide an exploratory window into how the language of the law is correlated with political support. We also derive approximate posterior inference algorithms based on variational methods. 
Across 12 years of legislative data, we predict specific voting patterns with high accuracy.", "title": "" }, { "docid": "neg:1961514_17", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "neg:1961514_18", "text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. 
The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques to model cancer risk or patient outcomes.", "title": "" }, { "docid": "neg:1961514_19", "text": "Nowadays, a great effort is being made to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. 
The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.", "title": "" } ]
1961515
Sexuality before and after male-to-female sex reassignment surgery.
[ { "docid": "pos:1961515_0", "text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.", "title": "" } ]
[ { "docid": "neg:1961515_0", "text": "BACKGROUND\nPerforator-based flaps have been explored across almost all of the lower leg except in the Achilles tendon area. This paper introduced a perforator flap sourced from this area with regard to its anatomic basis and clinical applications.\n\n\nMETHODS\nTwenty-four adult cadaver legs were dissected to investigate the perforators emerging along the lateral edge of the Achilles tendon in terms of number and location relative to the tip of the lateral malleolus, and distribution. Based on the anatomic findings, perforator flaps, based on the perforator(s) of the lateral calcaneal artery (LCA) alone or in concert with the perforator of the peroneal artery (PA), were used for reconstruction of lower-posterior heel defects in eight cases. Postoperatively, subjective assessment and Semmes-Weinstein filament test were performed to evaluate the sensibility of the sural nerve-innerved area.\n\n\nRESULTS\nThe PA ended into the anterior perforating branch and LCA at the level of 6.0 ± 1.4 cm (range 3.3-9.4 cm) above the tip of the lateral malleolus. Both PA and LCA, especially the LCA, gave rise to perforators to contribute to the integument overlying the Achilles tendon. Of eight flaps, six were based on perforator(s) of the LCA and two were on perforators of the PA and LCA. Follow-up lasted for 6-28 months (mean 13.8 months), during which total flap loss and nerve injury were not found. Functional and esthetic outcomes were good in all patients.\n\n\nCONCLUSION\nThe integument overlying the Achilles tendon gets its blood supply through the perforators of the LCA primarily and that of through the PA secondarily. 
The LCA perforator(s)-based and the LCA plus PA perforators-based stepladder flap is a reliable, sensate flap, and should be thought of as a valuable procedure of choice for coverage of lower-posterior heel defects in selected patients.", "title": "" }, { "docid": "neg:1961515_1", "text": "Public health thrives on high-quality evidence, yet acquiring meaningful data on a population remains a central challenge of public health research and practice. Social monitoring, the analysis of social media and other user-generated web data, has brought advances in the way we leverage population data to understand health. Social media offers advantages over traditional data sources, including real-time data availability, ease of access, and reduced cost. Social media allows us to ask, and answer, questions we never thought possible. This book presents an overview of the progress on uses of social monitoring to study public health over the past decade. We explain available data sources, common methods, and survey research on social monitoring in a wide range of public health areas. Our examples come from topics such as disease surveillance, behavioral medicine, and mental health, among others. We explore the limitations and concerns of these methods. Our survey of this exciting new field of data-driven research lays out future research directions.", "title": "" }, { "docid": "neg:1961515_2", "text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial dome, a feature which, though much debased in certain savage races, essentially characterises the human species. But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. 
The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (...) Applying the above argument to the Neanderthal skull, and considering ... that it more closely conforms to the brain-case of the Chimpanzee, ... there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).", "title": "" }, { "docid": "neg:1961515_3", "text": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model.
Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.", "title": "" }, { "docid": "neg:1961515_4", "text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (power Factor) and improved PQ (Power Quality) at the supply.", "title": "" }, { "docid": "neg:1961515_5", "text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. 
In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. © 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "neg:1961515_6", "text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can help in estimating an optimal ransom value.
We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.", "title": "" }, { "docid": "neg:1961515_7", "text": "The underrepresentation of women at the top of math-intensive fields is controversial, with competing claims of biological and sociocultural causation. The authors develop a framework to delineate possible causal pathways and evaluate evidence for each. Biological evidence is contradictory and inconclusive. Although cross-cultural and cross-cohort differences suggest a powerful effect of sociocultural context, evidence for specific factors is inconsistent and contradictory. Factors unique to underrepresentation in math-intensive fields include the following: (a) Math-proficient women disproportionately prefer careers in non-math-intensive fields and are more likely to leave math-intensive careers as they advance; (b) more men than women score in the extreme math-proficient range on gatekeeper tests, such as the SAT Mathematics and the Graduate Record Examinations Quantitative Reasoning sections; (c) women with high math competence are disproportionately more likely to have high verbal competence, allowing greater choice of professions; and (d) in some math-intensive fields, women with children are penalized in promotion rates. The evidence indicates that women's preferences, potentially representing both free and constrained choices, constitute the most powerful explanatory factor; a secondary factor is performance on gatekeeper tests, most likely resulting from sociocultural rather than biological causes.", "title": "" }, { "docid": "neg:1961515_8", "text": "McDonald’s develop product lines.
But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.", "title": "" }, { "docid": "neg:1961515_9", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. 
A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "neg:1961515_10", "text": "OBJECTIVE\nTo encourage treatment of depression and prevention of suicide in physicians by calling for a shift in professional attitudes and institutional policies to support physicians seeking help.\n\n\nPARTICIPANTS\nAn American Foundation for Suicide Prevention planning group invited 15 experts on the subject to evaluate the state of knowledge about physician depression and suicide and barriers to treatment. The group assembled for a workshop held October 6-7, 2002, in Philadelphia, Pa.\n\n\nEVIDENCE\nThe planning group worked with each participant on a preworkshop literature review in an assigned area. Abstracts of presentations and key publications were distributed to participants before the workshop. After workshop presentations, participants were assigned to 1 of 2 breakout groups: (1) physicians in their role as patients and (2) medical institutions and professional organizations. The groups identified areas that required further research, barriers to treatment, and recommendations for reform.\n\n\nCONSENSUS PROCESS\nThis consensus statement emerged from a plenary session during which each work group presented its recommendations. The consensus statement was circulated to and approved by all participants.\n\n\nCONCLUSIONS\nThe culture of medicine accords low priority to physician mental health despite evidence of untreated mood disorders and an increased burden of suicide. Barriers to physicians' seeking help are often punitive, including discrimination in medical licensing, hospital privileges, and professional advancement. This consensus statement recommends transforming professional attitudes and changing institutional policies to encourage physicians to seek help. 
As barriers are removed and physicians confront depression and suicidality in their peers, they are more likely to recognize and treat these conditions in patients, including colleagues and medical students.", "title": "" }, { "docid": "neg:1961515_11", "text": "Intelligent transport systems are the rising technology in the near future to build cooperative vehicular networks in which a variety of different ITS applications are expected to communicate with a variety of different units. Therefore, the demand for highly customized communication channels for each or sets of similar ITS applications is increased. This article explores the capabilities of available wireless communication technologies in order to produce a win-win situation while selecting suitable carrier(s) for a single application or a profile of similar applications. Communication requirements for future ITS applications are described to select the best available communication interface for the target application(s).", "title": "" }, { "docid": "neg:1961515_12", "text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of a particular ambiguous lexical item is collected from Bengali WordNet. On an experimental basis we have used the Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accuracy on the sense classification over the total input sentences.
We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.", "title": "" }, { "docid": "neg:1961515_13", "text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.", "title": "" }, { "docid": "neg:1961515_14", "text": "If, as many psychologists seem to believe, immediate memory represents a distinct system or set of processes from long-term memory (LTM), then what might it be for? This fundamental, functional question was surprisingly unanswerable in the 1970s, given the volume of research that had explored short-term memory (STM), and given the ostensible role that STM was thought to play in cognitive control (Atkinson & Shiffrin, 1971). Indeed, failed attempts to link STM to complex cognitive functions, such as reading comprehension, loomed large in Crowder's (1982) obituary for the concept.
Baddeley and Hitch (1974) tried to validate immediate memory's functions by testing subjects in reasoning, comprehension, and list-learning tasks at the same time their memory was occupied by irrelevant material. Generally, small memory loads (i.e., three or fewer items) were retained with virtually no effect on the primary tasks, whereas memory loads of six items consistently impaired reasoning, comprehension, and learning. Baddeley and Hitch therefore argued that \"working memory\" (WM)", "title": "" }, { "docid": "neg:1961515_16", "text": "This paper proposes a new method for fabric defect classification by incorporating the design of a wavelet frames based feature extractor with the design of a Euclidean distance based classifier. Channel variances at the outputs of the wavelet frame decomposition are used to characterize each nonoverlapping window of the fabric image. A feature extractor using linear transformation matrix is further employed to extract the classification-oriented features. With a Euclidean distance based classifier, each nonoverlapping window of the fabric image is then assigned to its corresponding category. Minimization of the classification error is achieved by incorporating the design of the feature extractor with the design of the classifier based on minimum classification error (MCE) training method.
The proposed method has been evaluated on the classification of 329 defect samples containing nine classes of fabric defects, and 328 nondefect samples, where 93.1% classification accuracy has been achieved.", "title": "" }, { "docid": "neg:1961515_17", "text": "Data mining is the extraction of useful, prognostic, interesting, and unknown information from massive transaction databases and other repositories. Data mining tools predict potential trends and actions, allowing various fields to make proactive, knowledge-driven decisions. Recently, with the rapid growth of information technology, the amount of data has exponentially increased in various fields. Big data mostly comes from people’s day-to-day activities and Internet-based companies. Mining frequent itemsets and association rule mining (ARM) are well-analysed techniques for revealing attractive correlations among variables in huge datasets. The Apriori algorithm is one of the most broadly used algorithms in ARM, and it collects the itemsets that frequently occur in order to discover association rules in massive datasets. The original Apriori algorithm is for sequential (single node or computer) environments. This Apriori algorithm has many drawbacks for processing huge datasets, such as that a single machine’s memory, CPU and storage capacity are insufficient. Parallel and distributed computing is the better solution to overcome the above problems. Many researchers have parallelized the Apriori algorithm. This study performs a survey on several well-enhanced and revised techniques for the parallel Apriori algorithm in the Hadoop-MapReduce environment. The Hadoop-MapReduce framework is a programming model that efficiently and effectively processes enormous databases in parallel. It can handle large clusters of commodity hardware in a reliable and fault-tolerant manner.
This survey will provide an overall view of the parallel Apriori algorithm implementation in the Hadoop-MapReduce environment and briefly discuss the challenges and open issues of big data in the cloud and Hadoop-MapReduce. Moreover, this survey will not only give overall existing improved Apriori algorithm methods on Hadoop-MapReduce but also provide future research direction for upcoming researchers.", "title": "" }, { "docid": "neg:1961515_18", "text": "Material recognition is an important subtask in computer vision. In this paper, we aim for the identification of material categories from a single image captured under unknown illumination and view conditions. Therefore, we use several features which cover various aspects of material appearance and perform supervised classification using Support Vector Machines. We demonstrate the feasibility of our approach by testing on the challenging Flickr Material Database. Based on this dataset, we also carry out a comparison to a previously published work [Liu et al., ”Exploring Features in a Bayesian Framework for Material Recognition”, CVPR 2010] which uses Bayesian inference and reaches a recognition rate of 44.6% on this dataset and represents the current state-of-the-art. With our SVM approach we obtain 53.1% and hence, significantly outperform this approach.", "title": "" }, { "docid": "neg:1961515_19", "text": "Phishing is a major security issue for banking and financial institutions. Phishing is a webpage attack that pretends to be a genuine customer web service, using tactics and mimicry from unauthorized persons or organizations. It is an illegitimate act that steals users' personal information such as bank details, social security numbers and credit card details, by showcasing itself as a trustworthy object, in the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites.
This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combining source code and URL in the webpage. Keywords—Phishing, Website attacks, Source Code, URL.", "title": "" } ]
1961516
Overview: Generalizations of Multi-Agent Path Finding to Real-World Scenarios
[ { "docid": "pos:1961516_0", "text": "Multi-Agent Path Finding (MAPF) is well studied in both AI and robotics. Given a discretized environment and agents with assigned start and goal locations, MAPF solvers from AI find collision-free paths for hundreds of agents with user-provided sub-optimality guarantees. However, they ignore that actual robots are subject to kinematic constraints (such as finite maximum velocity limits) and suffer from imperfect plan-execution capabilities. We therefore introduce MAPF-POST, a novel approach that makes use of a simple temporal network to postprocess the output of a MAPF solver in polynomial time to create a plan-execution schedule that can be executed on robots. This schedule works on non-holonomic robots, takes their maximum translational and rotational velocities into account, provides a guaranteed safety distance between them, and exploits slack to absorb imperfect plan executions and avoid time-intensive replanning in many cases. We evaluate MAPF-POST in simulation and on differential-drive robots, showcasing the practicality of our approach.", "title": "" } ]
[ { "docid": "neg:1961516_0", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "neg:1961516_1", "text": "Studying a software project by mining data from a single repository has been a very active research field in software engineering during the last years. However, few efforts have been devoted to perform studies by integrating data from various repositories, with different kinds of information, which would, for instance, track the different activities of developers. One of the main problems of these multi-repository studies is the different identities that developers use when they interact with different tools in different contexts. This makes them appear as different entities when data is mined from different repositories (and in some cases, even from a single one). In this paper we propose an approach, based on the application of heuristics, to identify the many identities of developers in such cases, and a data structure for allowing both the anonymized distribution of information, and the tracking of identities for verification purposes. The methodology will be presented in general, and applied to the GNOME project as a case example. 
Privacy issues and partial merging with new data sources will also be considered and discussed.", "title": "" }, { "docid": "neg:1961516_2", "text": "A reversible gate has an equal number of inputs and outputs and one-to-one mappings between input vectors and output vectors; so that, the input vector states can be always uniquely reconstructed from the output vector states. This correspondence introduces a reversible full-adder circuit that requires only three reversible gates and produces the least number of \"garbage outputs\", that is, two. After that, a theorem has been proposed that proves the optimality of the propounded circuit in terms of number of garbage outputs. An efficient algorithm is also introduced in this paper that leads to the construction of a reversible circuit.", "title": "" }, { "docid": "neg:1961516_3", "text": "Cite this article Romager JA, Hughes K, Trimble JE. Personality traits as predictors of leadership style preferences: Investigating the relationship between social dominance orientation and attitudes towards authentic leaders. Soc Behav Res Pract Open J. 2017; 3(1): 1-9. doi: 10.17140/SBRPOJ-3-110", "title": "" }, { "docid": "neg:1961516_4", "text": "Dynamic magnetic resonance imaging (MRI) scans can be accelerated by utilizing compressed sensing (CS) reconstruction methods that allow for diagnostic quality images to be generated from undersampled data. Unfortunately, CS reconstruction is time-consuming, requiring hours between a dynamic MRI scan and image availability for diagnosis. In this work, we train a convolutional neural network (CNN) to perform fast reconstruction of severely undersampled dynamic cardiac MRI data, and we explore the utility of CNNs for further accelerating dynamic MRI scan times.
Compared to state-of-the-art CS reconstruction techniques, our CNN achieves reconstruction speeds that are 150x faster without significant loss of image quality. Additionally, preliminary results suggest that CNNs may allow scan times that are 2x faster than those allowed by CS.", "title": "" }, { "docid": "neg:1961516_5", "text": "The local-dimming backlight has recently been presented for use in LCD TVs. However, the image resolution is low, particularly at weak edges. In this work, a local-dimming backlight is developed to improve the image contrast and reduce power dissipation. The algorithm enhances low-level edge information to improve the perceived image resolution. Based on the algorithm, a 42-in backlight module with white light-emitting diode (LED) devices was driven by a local dimming control core. The block-wise register approach substantially reduced the number of required line-buffers and shortened the latency time. The measurements made in the laboratory indicate that the backlight system reduces power dissipation by an average of 48% and exhibits no visible distortion relative to the fixed backlighting system. The system was successfully demonstrated in a 42-in LCD TV, and the contrast ratio was greatly improved by a factor of 100.
We suggest that such packets constitute the basic building blocks of cortical coding.", "title": "" }, { "docid": "neg:1961516_7", "text": "This literature review focuses on aesthetics of interaction design with the further goal of outlining a study towards a prediction model of aesthetic value. The review covers three main issues, tightly related to aesthetics of interaction design: evaluation of aesthetics, relations between aesthetics and interaction qualities and implementation of aesthetics in interaction design. Analysis of previous models is carried out according to the definition of interaction aesthetics: a holistic approach to aesthetic perception considering its action- and appearance-related components. As a result, an empirical study is proposed for investigating the relations between attributes of interaction and users' aesthetic experience.", "title": "" }, { "docid": "neg:1961516_8", "text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets.
The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.", "title": "" }, { "docid": "neg:1961516_9", "text": "Steganography and steganalysis received a great deal of attention from media and law enforcement. Many powerful and robust methods of steganography and steganalysis have been developed. In this paper we are considering the methods of steganalysis that are to be used for this process. The paper gives some idea about steganalysis and its methods.", "title": "" }, { "docid": "neg:1961516_10", "text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.", "title": "" }, { "docid": "neg:1961516_11", "text": "The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets.
Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.", "title": "" }, { "docid": "neg:1961516_12", "text": "The barrier function of the intestine is essential for maintaining the normal homeostasis of the gut and mucosal immune system. Abnormalities in intestinal barrier function expressed by increased intestinal permeability have long been observed in various gastrointestinal disorders such as Crohn's disease (CD), ulcerative colitis (UC), celiac disease, and irritable bowel syndrome (IBS). Imbalance of metabolizing junction proteins and mucosal inflammation contributes to intestinal hyperpermeability. Emerging studies exploring in vitro and in vivo model system demonstrate that Rho-associated coiled-coil containing protein kinase- (ROCK-) and myosin light chain kinase- (MLCK-) mediated pathways are involved in the regulation of intestinal permeability. With this perspective, we aim to summarize the current state of knowledge regarding the role of inflammation and ROCK-/MLCK-mediated pathways leading to intestinal hyperpermeability in gastrointestinal disorders. In the near future, it may be possible to specifically target these specific pathways to develop novel therapies for gastrointestinal disorders associated with increased gut permeability.", "title": "" }, { "docid": "neg:1961516_13", "text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). 
In this work, a home model is analyzed to demonstrate an energy-efficient IoT-based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.", "title": "" }, { "docid": "neg:1961516_14", "text": "In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.", "title": "" }, { "docid": "neg:1961516_15", "text": "Correspondence Lars Ruthotto, Department of Mathematics and Computer Science, Emory University, 400 Dowman Dr, Atlanta, GA 30322, USA. Email: [email protected]. Summary: Image registration is a central problem in a variety of areas involving imaging techniques and is known to be challenging and ill-posed. Regularization functionals based on hyperelasticity provide a powerful mechanism for limiting the ill-posedness. A key feature of hyperelastic image registration approaches is their ability to model large deformations while guaranteeing their invertibility, which is crucial in many applications. To ensure that numerical solutions satisfy this requirement, we discretize the variational problem using piecewise linear finite elements, and then solve the discrete optimization problem using the Gauss–Newton method.
In this work, we focus on computational challenges arising in approximately solving the Hessian system. We show that the Hessian is a discretization of a strongly coupled system of partial differential equations whose coefficients can be severely inhomogeneous. Motivated by a local Fourier analysis, we stabilize the system by thresholding the coefficients. We propose a Galerkin-multigrid scheme with a collective pointwise smoother. We demonstrate the accuracy and effectiveness of the proposed scheme, first on a two-dimensional problem of a moderate size and then on a large-scale real-world application with almost 9 million degrees of freedom.", "title": "" }, { "docid": "neg:1961516_16", "text": "Uncontrolled wind turbine configurations, such as stall regulation, capture energy relative to the amount of wind speed. Such a configuration requires a constant turbine speed because the directly coupled generator is also connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. This paper focuses on a controller design methodology applied to a variable-speed, horizontal axis wind turbine. A simple but rigid wind turbine model was used and linearised at some operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparative responses between these two controllers.
The paper also concludes that with the present methodology, despite the erratic wind data, the wind turbine still manages to operate within the stable region 88% of the time.", "title": "" }, { "docid": "neg:1961516_17", "text": "The detection of symmetry axes through the optimization of a given symmetry measure, computed as a function of the mean-square error between the original and reflected images, is investigated in this paper. A genetic algorithm and an optimization scheme derived from the self-organizing maps theory are presented. The notion of symmetry map is then introduced. This transform allows us to map an object into a symmetry space where its symmetry properties can be analyzed. The locations of the different axes that globally and locally maximize the symmetry value can be obtained. The input data are assumed to be vector-valued, which allows us to focus on either shape, color or texture information. Finally, the application to skin cancer diagnosis is illustrated and discussed.", "title": "" }, { "docid": "neg:1961516_18", "text": "Whenever a document containing sensitive information needs to be made public, privacy-preserving measures should be implemented. Document sanitization aims at detecting sensitive pieces of information in text, which are removed or hidden prior to publication. Even though methods detecting sensitive structured information like e-mails, dates or social security numbers, or domain specific data like disease names have been developed, the sanitization of raw textual data has been scarcely addressed. In this paper, we present a general-purpose method to automatically detect sensitive information from textual documents in a domain-independent way. Relying on information theory and a corpus as large as the Web, it assesses the degree of sensitiveness of terms according to the amount of information they provide.
Preliminary results show that our method significantly improves the detection recall in comparison with approaches based on trained classifiers.", "title": "" } ]
1961517
Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments
[ { "docid": "pos:1961517_0", "text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches", "title": "" } ]
[ { "docid": "neg:1961517_0", "text": "In recent years, multiple-line acquisition (MLA) has been introduced to increase frame rate in cardiac ultrasound medical imaging. However, this method induces blocklike artifacts in the image. One approach suggested, synthetic transmit beamforming (STB), involves overlapping transmit beams which are then interpolated to remove the MLA blocking artifacts. Independently, the application of minimum variance (MV) beamforming has been suggested in the context of MLA. We demonstrate here that each approach is only a partial solution and that combining them provides a better result than applying either approach separately. This is demonstrated by using both simulated and real phantom data, as well as cardiac data. We also show that the STB-compensated MV beamformer outperforms single-line acquisition (SLA) delay-and-sum in terms of lateral resolution.", "title": "" }, { "docid": "neg:1961517_1", "text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadows in high resolution satellite images. PCA provides orthogonal channels, thus allowing the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using a histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in the luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experimental results show the effectiveness of the proposed method.", "title": "" }, { "docid": "neg:1961517_2", "text": "
In this paper we use the contraction mapping theorem to obtain asymptotic stability results for the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not require the delays to be bounded are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improves and extends the results in the literature. Two examples are also given to illustrate this work.", "title": "" }, { "docid": "neg:1961517_3", "text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\" - for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents apply similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation-free motions for each of the agents. We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.", "title": "" }, { "docid": "neg:1961517_4", "text": "We compiled details of over 8000 assessments of protected area management effectiveness across the world and developed a method for analyzing results across diverse assessment methodologies and indicators. Data was compiled and analyzed for over 4000 of these sites. Management of these protected areas varied from weak to effective, with about 40% showing major deficiencies.
About 14% of the surveyed areas showed significant deficiencies across many management effectiveness indicators and hence lacked basic requirements to operate effectively. The strongest management factors recorded on average related to establishment of protected areas (legal establishment, design, legislation and boundary marking) and to effectiveness of governance; while the weakest aspects of management included community benefit programs, resourcing (funding reliability and adequacy, staff numbers and facility and equipment maintenance) and management effectiveness evaluation. Estimates of management outcomes, including both environmental values conservation and impact on communities, were positive. We conclude that in spite of inadequate funding and management processes, there are indications that protected areas are contributing to biodiversity conservation and community well-being.", "title": "" }, { "docid": "neg:1961517_5", "text": "The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized O(log^3 n log k)-approximation algorithm for the group Steiner tree problem on an n-node graph, where k is the number of groups. The best previous performance guarantee was (1 + ln(k/2))√k (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slavik on a tour version, called the errand scheduling problem.
We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to O(log^2 n log k) in the case of graphs that exclude small minors by using a better alternative to Bartal’s result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman); this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case.", "title": "" }, { "docid": "neg:1961517_6", "text": "This paper presents a novel mobility metric for mobile ad hoc networks (MANET) that is based on the ratio between the received power levels of successive transmissions measured at any node from all its neighboring nodes. This mobility metric is subsequently used as a basis for cluster formation which can be used for improving the scalability of services such as routing in such networks. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the Lowest-ID clustering algorithm (“least clusterhead change” [3]), which is a well known clustering algorithm for MANETs. We show reduction of as much as 33% in the number of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, the network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that since using MOBIC results in a more stable configuration, it will directly lead to improvement of performance.
Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.", "title": "" }, { "docid": "neg:1961517_8", "text": "Forecasting is an integral part of any organization's decision-making process, enabling it to predict targets and modify strategy to improve future sales or productivity. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network (ARNN), XGBoost, SVM, Hybrid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. The training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, a linear model such as ARIMA was applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave lower RMSE. Then, to further optimize the performance, composite models were designed using hybrid and decomposition techniques. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost and Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models.
Then, the composite model was designed using STL Decomposition, where the decomposed components, namely the seasonal, trend and remainder components, were forecast by Snaive, ARIMA and XGBoost, respectively. STL gave better results than the individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and states that the decomposition technique is better than the hybrid technique for this application.", "title": "" }, { "docid": "neg:1961517_9", "text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare them with a parameterized approach.
The recent impact of social networks on shaping the perception of truth in the political arena shows how such a perception is corroborated and established by the online users, collectively. However, investigative journalism for discovering truth is a costly option, given the vast spectrum of online information. In some cases, both journalists and online users choose not to investigate the authenticity of the news they receive, because they assume other actors of the network have carried the cost of validation. Therefore, the new phenomenon of “fake news” has emerged within the context of social networks. Online social networks, similarly to Systems of Systems, exhibit emergent properties, which make authentication processes difficult, given the availability of multiple sources. In this study, we show how this conflict can be modeled as a volunteer's dilemma. We also show how the public contribution through news subscription (shared rewards) can impact the dominance of truth over fake news in the network.", "title": "" }, { "docid": "neg:1961517_10", "text": "This research covers the author's work on an automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor in a minimal-cost framework for exploration purposes in the area of robot navigation. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filter to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of the Robot Operating System (ROS), the open-source GMapping package was utilized as a basis for map generation and Simultaneous Localization and Mapping (SLAM). Out of the many different tele-operation techniques, the one used is the interactive marker, which controls the TurtleBot 2 movements via RVIZ (a 3D visualization tool for ROS).
Test results obtained with the multipurpose robot in both artificial and real environments demonstrate the advantages of the proposed strategy. From the experiments, it is found that the Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since a Kinect sensor is much cheaper than a laser range finder. Additional experiments were likewise carried out to test the performance of the mobile robot performing frontier exploration in an unknown environment while running SLAM alongside the proposed technique.", "title": "" }, { "docid": "neg:1961517_11", "text": "This article describes in detail feature extraction methods for crop disease based on computer image processing technology. Feature extraction methods based on color, texture and shape, together with their respective problems, are introduced from the perspective of diseased leaves. Recent applied research on image feature extraction in the field of crop disease is reviewed, the results of the feature extraction methods are analyzed, and the future application of image feature extraction techniques to the intelligent detection of crop diseases is discussed.", "title": "" }, { "docid": "neg:1961517_12", "text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood).
The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.", "title": "" }, { "docid": "neg:1961517_14", "text": "This paper provides an overview of CMOS-based sensor technology with specific attention placed on devices made through micromachining of CMOS substrates and thin films. Microstructures may be formed using either pre-CMOS, intra-CMOS and post-CMOS fabrication approaches. To illustrate and motivate monolithic integration, a handful of microsystem examples, including inertial sensors, gravimetric chemical sensors, microphones, and a bone implantable sensor will be highlighted. Design constraints and challenges for CMOS-MEMS devices will be covered", "title": "" }, { "docid": "neg:1961517_15", "text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time. In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism", "title": "" }, { "docid": "neg:1961517_16", "text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. 
The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.", "title": "" }, { "docid": "neg:1961517_16", "text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MTJ) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).", "title": "" }, { "docid": "neg:1961517_17", "text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrated waveguide (SIW) technology. Eight sectorial lines are formed by inserting radial slot-lines on the top plate of the SIW power divider. Each sectorial line can be controlled independently with a high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen from a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The changes in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.", "title": "" }, { "docid": "neg:1961517_19", "text": "Light detection and ranging (lidar) is becoming an increasingly popular technology among scientists for the development of predictive models of forest biophysical variables.
However, before this technology can be adopted with confidence for long-term monitoring applications in Canada, robust models must be developed that can be applied and validated over large and complex forested areas. This will require “scaling-up” from current models developed from high-density lidar data to low-density data collected at higher altitudes. This paper investigates the effect of lowering the average point spacing of discrete lidar returns on models of forest biophysical variables. Validation of results revealed that high-density models are well correlated with mean dominant height, basal area, crown closure, and average aboveground biomass (R2 = 0.84, 0.89, 0.60, and 0.91, respectively). Low-density models could not accurately predict crown closure (R2 = 0.36). However, they did provide slightly improved estimates for mean dominant height, basal area, and average aboveground biomass (R2 = 0.90, 0.91, and 0.92, respectively). Maps were generated and validated for the entire study area from the low-density models. The ability of low-density models to accurately map key biophysical variables is a positive indicator for the utility of lidar data for monitoring large forested areas.", "title": "" } ]