id: string (7–12 chars)
sentence1: string (6–1.27k chars)
sentence2: string (6–926 chars)
label: 4 classes
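The column schema above can be sketched as a simple record type. This is an illustrative sketch only: the `Example` dataclass and its field names mirror the four columns in the dump, not any official loader, and the instance below is the first record of this section.

```python
from dataclasses import dataclass

@dataclass
class Example:
    """One record of the dump: a sentence pair and its discourse-relation label."""
    id: str         # e.g. "train_400" (7-12 chars per the column stats)
    sentence1: str  # first sentence of the pair (6-1.27k chars)
    sentence2: str  # second sentence; entries begin lowercase in the dump (6-926 chars)
    label: str      # one of 4 classes, e.g. "contrasting"

# First record from this section, reproduced verbatim.
ex = Example(
    id="train_400",
    sentence1=("For example, if two discourse connectives are SYNONYMS then we "
               "would expect them to have similar distributions."),
    sentence2=("if two connectives are EXCLUSIVE, then we would expect them to "
               "have dissimilar distributions."),
    label="contrasting",
)
```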
train_400
For example, if two discourse connectives are SYNONYMS then we would expect them to have similar distributions.
if two connectives are EXCLUSIVE, then we would expect them to have dissimilar distributions.
contrasting
train_401
A distributional similarity function provides only a one-dimensional comparison of two distributions, namely how similar they are.
we can obtain an additional perspective by using a variance-based function.
contrasting
train_402
The key features of this approach are: Minimal human decision making.
to the standard approach, our method obviates, to a large extent, the need to make tough or potentially suboptimal design decisions.
contrasting
train_403
With a beam width of 1000 the parser returns something like a 50-best list (Collins, personal communication), but the actual number of parses returned for each sentence varies.
turning off dynamic programming results in a loss in efficiency.
contrasting
train_404
Dynamic programming parsing algorithms for PCFGs require O(m^2) dynamic programming states, where m is the length of the sentence, so an n-best parsing algorithm requires O(nm^2).
things get much worse when the grammar is bilexicalized.
contrasting
train_405
To solve the problem efficiently, we now adopt a variant of the branch-and-bound algorithm, similar to that described in (Kudo and Matsumoto, 2004). Abe and Zaki independently proposed an efficient method, rightmost-extension, for enumerating all subtrees from a given tree (Abe et al., 2002; Zaki, 2002).
first, the algorithm starts with a set of trees consisting of single nodes, and then expands a given tree of size (n−1) by attaching a new node to it to obtain trees of size n. it would be inefficient to expand nodes at arbitrary positions of the tree, as duplicated enumeration is inevitable.
contrasting
train_406
The implicit calculation yields a practical computation in training.
in testing, kernel methods require a number of kernel evaluations, which are too heavy to allow us to realize real applications.
contrasting
train_407
select relevant subtrees and achieves the best results for WSJ parsing.
these techniques are not based on the regularization framework focused on this paper and do not always eliminate all the redundant subtrees.
contrasting
train_408
For these purposes, and because of its higher content validity, IPSyn scores often tell us more than MLU scores.
the MLU holds the advantage of being far easier to compute.
contrasting
train_409
If we were looking for a fronted subordinate clause, only (c) would be a match.
each one of the sentences has an identical part-of-speech sequence.
contrasting
train_410
Something like this approach seems to be adopted in the literature on communication between autonomous software agents.
even though many situations considered in multiagent systems do involve more than two agents, most interaction protocols are designed only for two participants at a time.
contrasting
train_411
On the other hand, the contextual updates it enforces will not enable it to deal with the following (constructed) variant on 4; in other words, it does not afford responders the opportunity to comment on previous responders, as opposed to the original querier. One arguable problem with this protocol (equally applicable to the corresponding DRed grounding protocol) is that it licences long-distance acceptance and is, thus, inconsistent with the MAG benchmark.
it is potentially useful for interactions where there is explicitly more than one direct addressee.
contrasting
train_412
S is proposing activity α / L is considering proposal α; Intention: S is signalling that p / L is recognizing that p; Signal: S is presenting signal σ / L is identifying signal σ; Channel: S is executing behavior β / L is attending to behavior β (Table 1: Four levels of grounding; Schlangen, 2004).
only the work of Purver (2004) addresses the question of how the source of the error affects the form the CR takes.
contrasting
train_413
For example, common adjective-noun alternations are memorized.
since this linguistic information is not encoded in the model, unseen adjective noun pairs may still be handled incorrectly.
contrasting
train_414
A simpler alternate approach would be to compare bags-ofwords.
since our possible orderings are bound by the induced tree structure, we might overzealously prune a candidate with a different tree structure that allows a better target order.
contrasting
train_415
One might assume a smooth transition from text-based summarization to email and chat-based summarizations.
chat falls in the genre of correspondence, which requires dialogue and conversation analysis.
contrasting
train_416
The conventional wisdom since Magerman (1995) has been that lexicalization substantially improves performance compared to an unlexicalized baseline model (e.g., a probabilistic context-free grammar, PCFG).
this has been challenged by Klein and Manning (2003), who demonstrate that an unlexicalized model can achieve a performance close to the state of the art for lexicalized models.
contrasting
train_417
For example, (5) would be contracted to (4).
this approach only works if we are certain that the model is tagging the right words as compounds.
contrasting
train_418
The impact of perfect tagging is less drastic on the lexicalized models (around 1% increase).
our main finding, viz., that lexicalized models outperform unlexicalized models considerably on the FTB, remains valid, even with perfect tagging.
contrasting
train_419
Whenever the subject appears after the verb, the non-standard position may be annotated using a long-distance dependency (LDD).
as mentioned above, this information can also be retrieved from the grammatical function of the respective noun phrases: the GFs of the two NPs above would be 'subject' and 'accusative object' regardless of their position in the sentence.
contrasting
train_420
NP case German articles and pronouns are strongly marked for case.
the grammatical function of all articles is usually NK, meaning noun kernel.
contrasting
train_421
This is in part for the sake of simplicity: unlexicalized grammars are interesting because they are simple to estimate and parse, and adding smoothing makes both estimation and parsing nearly as complex as with fully lexicalized models.
because lexicalization adds little to the performance of German parsing models, it is therefore interesting to investigate the impact of smoothing on unlexicalized parsing models for German.
contrasting
train_422
As both the Witten-Bell and Kneser-Ney variants are fairly well known, we do not describe them further.
as Brants' approach (to our knowledge) has not been used elsewhere, and because it needs to be modified for our purposes, we show the version of the algorithm we use in Figure 1.
contrasting
train_423
The simplest kind of annotation is positional in nature, such as the association of a part-of-speech tag with each corpus position.
structural annotation such as that used in syntactic treebanks (e.g., Marcus et al., 1993) assigns a syntactic category to a contiguous sequence of corpus positions.
contrasting
train_424
The chances of one of the middle 32 elements matching something in the internal context of the VP is relatively high, and indeed the twenty-sixth word is ein.
if we move stepwise out from the nucleus in order to try to match was ein Seeufer werden, the only options are to find ein directly to the right of was or Seeufer directly to the left of werden, neither of which occurs, thus stopping the search.
contrasting
train_425
With s_i, s_{i−1} we can further reduce the error rate to 20.26%.
when we add a third dependency, the error rate worsens to 29.32%, which indicates a number of parameters too high for the given amount of training data.
contrasting
train_426
Clearly, Max-Ent is a rather different type of model from Stochastic OT, not only in the use of constraint ordering, but also in the objective function (conditional likelihood rather than likelihood/posterior).
it may be of interest to compare these two types of models.
contrasting
train_427
But this may not be easy, due to the difficulty in likelihood evaluation (including likelihood ratio) discussed in Section 2.
our algorithm provides a general solution to the problem of learning Stochastic OT grammars.
contrasting
train_428
Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains.
one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions.
contrasting
train_429
General, unconstrained induction of HMMs using the EM algorithm fails to detect useful field structure in either domain.
we demonstrate that small amounts of prior knowledge can be used to greatly improve the learned model.
contrasting
train_430
This is the primary way supervised methods work, with the loss function relativized to training label patterns.
for unsupervised learning, the primary candidate for an objective function is the data likelihood, and we don't have another suggestion here.
contrasting
train_431
The measure in Equation 1 is similar to the cosine metric, commonly used to determine the similarity of documents in the vector space model approach to Information Retrieval.
the cosine metric will not perform well for our application since it does not take into account the similarity between elements of a vector and would assign equal similarity to each pair of patterns in the example shown in Figure 1.
contrasting
train_432
It is often simply an unstated assumption that any full translation system, to achieve full performance, will sooner or later have to incorporate individual WSD components.
in some translation architectures and particularly in statistical machine translation (SMT), the translation engine already implicitly factors in many contextual features into lexical choice.
contrasting
train_433
This would give an upper-bound on performance and would quantify the effect of WSD errors.
we do not have a corpus which contains both sense annotation and multiple reference translations: the MT evaluation corpus is not annotated with the correct senses of Senseval target words, and the Senseval corpus does not include English translations of the sentences.
contrasting
train_434
In these cases, some of the reference translations also use "impact".
even when the WSD model constrains the decoder to select "impact" rather than "shock", the resulting sentence translation yields a lower BLEU score.
contrasting
train_435
(2004), or alternatively they could be incorporated as features for reranking in a maximum-entropy SMT model (Och and Ney, 2002), instead of using them to constrain the sentence translation hypotheses as done here.
the preceding discussion argues that it is doubtful that this would produce significantly different results, since the inherent problem from the "language model effect" would largely remain, causing sentence translations that include the WSD's preferred lexical choices to be discounted.
contrasting
train_436
However this is clearly unfeasible for all-words WSD tasks, in which all the words of an open text should be disambiguated.
the word expert approach works very well for lexical sample WSD tasks (i.e.
contrasting
train_437
Regarding the second direction, external knowledge would be required to help WSD algorithms to better generalize over the data available for training.
most of the state-of-the-art supervised approaches to WSD are still completely based on "internal" information only (i.e.
contrasting
train_438
Algorithms based solely on deeper representations inevitably suffer from the errors in computing these representations.
low level processing such as tokenization will be more accurate, and may also contain useful information missed by deep processing of text.
contrasting
train_439
(2000) used statistical parsing models to extract relational facts from text, which avoided pipeline processing of data.
their results are essentially based on the output of sentence parsing, which is a deep processing of text.
contrasting
train_440
Using the notation just defined, we can write the two surface kernels as follows: 1) Argument kernel. Intuitively, the dependency path connecting two arguments could provide a high level of syntactic regularization.
a complete match of two dependency paths is rare.
contrasting
train_441
While short-distance relations dominate and can be resolved by above simple features, the dependency tree and parse tree features can only take effect in the remaining much less long-distance relations.
full parsing is always prone to long distance errors although the Collins' parser used in our system represents the state-of-the-art in full parsing.
contrasting
train_442
We have thus removed the queries which contained OOV words -resulting in a set of 96 queries -which clearly biases the evaluation.
the results on both the 1-best and the lattice indexes are equally favored by this.
contrasting
train_443
The prosody model is also affected, since the alignment of incorrect words to the speech is imperfect, thereby degrading prosodic feature extraction.
the prosody model is more robust to recognition errors than textual knowledge, because of its lesser dependence on word identity.
contrasting
train_444
For log-linear models, POS information and an additional dictionary are used, which is not the case for GIZA++/IBM models.
treated as a method for performing symmetrization, log-linear combination alone yields better results than intersection, union, and refined methods.
contrasting
train_445
1 A cept is defined as the set of target words connected to a source word (Brown et al., 1993).
both model 3 and model 4 do not take the multiword cept into account.
contrasting
train_446
From the results, it can be seen that the larger the size of in-domain corpus is, the smaller the alignment error rate is.
when the number of the sentence pairs increase from 3030 to 5046, the error rate reduction in Table 4 is very small.
contrasting
train_447
From the results in Table 6, it can be seen that the larger the size of out-of-domain corpus is, the smaller the alignment error rate is.
when the number of the sentence pairs is more than 130,000, the error rate reduction is very small.
contrasting
train_448
As shown by the numbers in Table 1, the full lexicalized model produced promising alignment results on sentence pairs that have no more than 15 words on both sides.
due to its prohibitive O(n^8) computational complexity, our C++ implementation of the unpruned lexicalized model took more than 500 CPU hours, which were distributed over multiple machines, to finish one iteration of training.
contrasting
train_449
To verify the safety of the tic-tac-toe pruning technique, we applied it to the unlexicalized ITG using the same beam ratio (10^−5) and found that the AER on the test data was not changed.
whether or not the top-k lexical head pruning technique is equally safe remains a question.
contrasting
train_450
We construct a background model which is a basic unigram language model, P(A_2) = ∏_{a∈A_2} P(a).
we then pick targets chosen by the confidence estimate; this confidence estimate does not work well in our dataset.
contrasting
train_451
The experimental setup used in the fusion experiments was the same as before: training on 15 people, and testing on 137 people.
the postfusion evaluation differs from the pre-fusion evaluation.
contrasting
train_452
Both kinds of models have been developed for tagging entities such as people, places and organizations in news material.
the rapid development of bioinformatics has recently generated interest on the extraction of biological entities such as genes (Collier et al., 2000) and genomic variations (McDonald et al., 2004b) from biomedical literature.
contrasting
train_453
For the sentence "John Smith is the CEO at Inc. Corp.", the system would ideally extract the tuple (John Smith, CEO, Inc. Corp.).
for the sentence "Everyday John Smith goes to his office at Inc. Corp.", the system would extract (John Smith, ⊥, Inc. Corp.), since there is no mention of a job title.
contrasting
train_454
If this instance is marked as negative, then the model might incorrectly disfavor features that correlate John to Inc. Corp..
if this instance is labeled positive, then the model may tend to prefer the shorter and more compact incomplete relations since they will be abundant in the positive training examples.
contrasting
train_455
In the simplest version, these are simply treated like other constituents in the parse tree.
these can disrupt what may be termed the intended sequence of syntactic categories in the utterance, so we also tried skipping these constituents when mapping from the parse tree to shallow parse sequences.
contrasting
train_456
These cues include phonology, prosody, morphology, and syntax in the context of an utterance.
global phonotactic cues at the level of utterance or spoken document remain unexplored in previous work.
contrasting
train_457
Almost any tree-transduction operations defined on a single node will fail to generate the target sentence from the source sentence without using insertion/deletion operations.
if we view each dependency tree as an assembly of indivisible sub-sentential elementary trees (ETs), we can find a proper way to transduce the input tree to the output tree.
contrasting
train_458
Such broken and crossing dependencies can be modeled by SDIG if they appear inside a pair of elementary trees.
if they appear between the elementary trees, they are not compatible with the isomorphism assumption on which SDIG is based.
contrasting
train_459
It may be because the translation ambiguities of the chunk-based models are lower than those of the word-based models.
the processing speed of the IBM-style models is faster than the proposed model.
contrasting
train_460
The use of bilingual verb-noun collocations is also useful for improving the performance.
we still have some problems with data sparseness and the low coverage of bilingual verb-noun collocations.
contrasting
train_461
The method works by examining one sample at a time, and makes an update, then the perceptron method can find such a separator.
it is not entirely clear what this method does when the training data are not completely separable.
contrasting
train_462
Similarly, output like S2, despite its grammatical correctness, is choppy and too tedious to read.
our instance-based sentence boundary determination module will use examples in a corpus to partition those attributes into several sentences in a more balanced manner (S3).
contrasting
train_463
Recall that the Yamcha classifiers are trained on TR1; in addition, Rip is trained on the output of these Yamcha classifiers. The difference in performance between TE1 and TE2 shows the difference between the ATB1 and ATB2 (different source of news, and also small differences in annotation).
the results for Rip show that retraining the Rip classifier on a new corpus can improve the results, without the need for retraining all ten Yamcha classifiers (which takes considerable time).
contrasting
train_464
We intend to apply our approach to Arabic dialects, for which currently no annotated corpora exist, and for which very few written corpora of any kind exist (making the dialects bad candidates even for unsupervised learning).
there is a fair amount of descriptive work on dialectal morphology, so that dialectal morphological analyzers may be easier to come by than dialect corpora.
contrasting
train_465
Classification performance using Charniak parses is about 3% absolute worse than when using Tree-Bank parses.
argument identification performance using Charniak parses is about 12.7% absolute worse.
contrasting
train_466
Since we used the whole of the German-English section of the Europarl corpus, we could not try improving the alignments by simply adding more German-English training data.
there is nothing that limits our paraphrase extraction method to drawing on candidate paraphrases from a single target language.
contrasting
train_467
As we generate more number of random vectors, we can estimate the cosine similarity between two vectors more accurately.
in practice, the number (d) of random vectors required is highly domain dependent, i.e., it depends on the value of the total number of vectors (n), features (k) and the way the vectors are distributed.
contrasting
train_468
Overall, the algorithm takes O(nk + n log n) time.
for noun clustering, we generally have the number of nouns, n, smaller than the number of features, k.
contrasting
train_469
For example, generating 10 random vectors gives us a cosine error of 0.4432 (which is a large number, since cosine similarity ranges from 0 to 1).
generation of more random vectors leads to reduction in error rate as seen by the values for 1000 (0.0493) and 10000 (0.0156).
contrasting
train_470
We could not calculate the total time taken to build noun similarity list using the traditional technique on the entire corpus.
we estimate that the time taken would be at least 50,000 hours (and perhaps even more), with a few terabytes of disk space needed.
contrasting
train_471
In recent years, researchers have proposed several algorithms to generate word alignments.
evaluating word alignments is difficult because even humans have difficulty performing this task.
contrasting
train_472
Other metrics assess the impact of alignments externally, e.g., different alignments are tested by comparing the corresponding MT outputs using automated evaluation metrics (e.g., BLEU (Papineni et al., 2002) or METEOR (Banerjee and Lavie, 2005)).
these studies showed that AER and BLEU do not correlate well (Callison-Burch et al., 2004;Goutte et al., 2004;Ittycheriah and Roukos, 2005).
contrasting
train_473
Similarly, if there is a missing link, only the recall is reduced slightly.
when computing CPER, an incorrect or missing alignment link might result in more than one phrase pair being eliminated from or added to the set of phrases.
contrasting
train_474
While we expect some correlation between these two types of segmentation, they are clearly different problems.
one comparable study is described in (Galley et al., 2003).
contrasting
train_475
Note that they improved their accuracy by combining the unsupervised output with discourse features in a supervised classifier -while we do not attempt a similar comparison here, we expect a similar technique would yield similar segmentation improvements.
we take a generative approach, modelling the text as being generated by a sequence of mixtures of underlying topics.
contrasting
train_476
The focus of our work, however, is on an orthogonal yet fundamental aspect of this analysis -the impact of long-range cohesion dependencies on segmentation performance.
to previous approaches, the homogeneity of a segment is determined not only by the similarity of its words, but also by their relation to words in other segments of the text.
contrasting
train_477
We aim to minimize the cut, which is defined to be the sum of the crossing edges between the two sets of nodes.
in other words, we want to split the sentences into two maximally dissimilar classes by choosing A and B to minimize: we need to ensure that the two partitions are not only maximally different from each other, but also that they are themselves homogeneous by accounting for intra-partition node similarity.
contrasting
train_478
A_1 … A_k form a partition of the graph, and V − A_k is the set difference between the entire graph and partition k. Decoding: Papadimitriou proved that the problem of minimizing normalized cuts on graphs is NP-complete (Shi and Malik, 2000).
in our case, the multi-way cut is constrained to preserve the linearity of the segmentation.
contrasting
train_479
In the formulation above we use sentences as our nodes.
we can also represent graph nodes with non-overlapping blocks of words of fixed length.
contrasting
train_480
The noun-pronoun path coreference can be used directly as a feature in a pronoun resolution system.
path coreference is undefined for cases where there is no path between the pronoun and the candidate noun - for example, when the candidate is in the previous sentence.
contrasting
train_481
These features are calculated by mining the parse trees, and then could be used for resolution by using manually designed rules (Lappin and Leass, 1994;Kennedy and Boguraev, 1996;Mitkov, 1998), or using machine-learning methods (Aone and Bennett, 1995;Yang et al., 2004;Luo and Zitouni, 2005).
such a solution has its limitation.
contrasting
train_482
Normally, parsing is done on the sentence level.
in many cases a pronoun and an antecedent candidate do not occur in the same sentence.
contrasting
train_483
This should be because feature Full-Expansion captures a larger portion of the parse trees, and thus can provide more syntactic information than Min Expansion or Simple Expansion.
if the texts are less-formally structured as those in BNews, Full-Expansion would inevitably involve more noises and thus adversely affect the resolution performance.
contrasting
train_484
As shown in Table 5, the two grammatical-role features are important for the pronoun resolution: removing these features results in up to 5.7% (NWire) decrease in success.
when the structured feature is included, the loss in success reduces to 1.9% and 1.1% for NWire and BNews, and a slight improvement can even be achieved for NPaper.
contrasting
train_485
In line with the reports in (Luo and Zitouni, 2005) we do observe the performance improvement against the baseline (NORM) for all the domains.
the increase in the success rates (up to 1.3%) is not so large as by adding the structured feature (NORM+S Simple) instead.
contrasting
train_486
We can see that Charniak (2000)'s parser leads to higher success rates for NPaper and BNews, while 's achieves better results for NWire.
the difference between the results of the two parsers is not significant (less than 2% success) for the three domains, no matter whether the structured feature is used alone or in combination.
contrasting
train_487
Traditionally, syntactic information from parse trees is represented as a set of flat features.
the features are usually selected and defined by heuristics and may not necessarily capture all the syntactic information provided by the parse trees.
contrasting
train_488
In the psycholinguistics literature, the comparative difficulty of object-relative clauses has been explained in terms of verbal working memory (King and Just, 1991), distance between the gap and the filler (Bever and McElree, 1988), or perspective shifting (MacWhinney, 1982).
the test results in this study provide a simpler account for the effect.
contrasting
train_489
Corley and Crocker clearly state that their model is strictly limited to lexical ambiguity resolution, and their test of the model was bounded to the noun-verb ambiguity.
the findings in the current study play out differently.
contrasting
train_490
In this conception, a language (be it natural or not) is produced (or generated) by a grammar by means of a specific mechanism, for example derivation.
when no structure can be built, nothing can be said about the input to be parsed except, eventually, the origin of the failure.
contrasting
train_491
This information is computed as follows: let C be a construction defined in the grammar by means of a set of properties S_C, and let A_C be an assignment for the construction C. • Satisfaction ratio (SR): the number of satisfied properties divided by the number of evaluated properties. The SR value varies between 0 and 1, the two extreme values indicating that no properties are satisfied (SR=0) or none of them are violated (SR=1).
SR only relies on the evaluated properties.
contrasting
train_492
Let's imagine the case where 7 constraints can be evaluated for both constructions, with an equal SR.
the two constructions do not have the same quality: one relies on the evaluation of all the possible constraints (in the PP) whereas the other only uses a few of them (in the VP).
contrasting
train_493
The experiment described in the next section will show that this weighting level seems to be efficient enough.
in case of necessity, it remains possible to weight directly some constraints into a given construction, overriding thus the default weight assigned to the constraint types.
contrasting
train_494
These three indicators do not have the same impact in the evaluation of the characterization; they are then balanced with coefficients in the normalized formula: • PI = (k×QI) + (l×SR) + (m×CC). As such, PI constitutes an evaluation of the characterization for a given construction.
it is necessary to take into account the "quality" of the constituents of the construction as well.
contrasting
train_495
In the end, the GI index of the VP is better than that of the ill-formed NP: For the same reasons, the higher-level construction S also compensates for the bad score of the NP.
in the end, the final GI of the sentence is much lower than that of the corresponding well-formed sentence (see above). The different figures of the sentence (2b) show that the violation of a unique constraint (in this case the Oblig(Adj), indicating the absence of the head in the AP) can lead to a globally lower GI than the violation of two heavy constraints as for (2a).
contrasting
train_496
Typically CRFs use binary indicator functions as features; these functions are only active when the observations meet some criteria and the label a_t (or label pair (a_{t−1}, a_t)) matches a pre-specified label (pair).
in our model the labellings are word indices in the target sentence and cannot be compared readily to labellings at other sites in the same sentence, or in other sentences with a different length.
contrasting
train_497
The final results of 6.47 and 5.19 with and without Model 4 features both exceed the performance of Model 4 alone.
the unsupervised Model 4 did not have access to the word alignments in our training set.
contrasting
train_498
(2005) who presented a word matching model for discriminative alignment which they were able to solve optimally.
their model is limited to only providing one-to-one alignments.
contrasting
train_499
Existing methods for exploiting comparable corpora look for parallel data at the sentence level.
we believe that very non-parallel corpora have none or few good sentence pairs; most of their parallel data exists at the sub-sentential level.
contrasting