lilt-en-funsd

This model is a fine-tuned version of SCUT-DLVCLab/lilt-roberta-en-base. The training dataset is not recorded in the card, but the model name and the answer/header/question entity labels point to FUNSD. It achieves the following results on the evaluation set:

  • Loss: 1.9061
  • Answer: precision 0.8622, recall 0.9192, F1 0.8898 (817 entities)
  • Header: precision 0.6585, recall 0.4538, F1 0.5373 (119 entities)
  • Question: precision 0.8929, recall 0.9211, F1 0.9068 (1077 entities)
  • Overall precision: 0.8706
  • Overall recall: 0.8927
  • Overall F1: 0.8815
  • Overall accuracy: 0.7959
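The overall numbers are the micro-average over the three entity classes. As a quick sanity check, the sketch below reconstructs the per-class span counts by rounding (the exact counts are an assumption recovered from the reported precision, recall, and support) and reproduces the overall precision, recall, and F1:

```python
# Reconstruct per-class true-positive and predicted-span counts from the
# reported precision, recall, and support, then micro-average across classes.
classes = {
    "answer":   {"precision": 0.8622, "recall": 0.9192, "support": 817},
    "header":   {"precision": 0.6585, "recall": 0.4538, "support": 119},
    "question": {"precision": 0.8929, "recall": 0.9211, "support": 1077},
}

tp = pred = gold = 0
for c in classes.values():
    c_tp = round(c["recall"] * c["support"])   # true positives for this class
    tp += c_tp
    pred += round(c_tp / c["precision"])       # spans the model predicted
    gold += c["support"]                       # gold spans

precision = tp / pred           # 1797 / 2064
recall = tp / gold              # 1797 / 2013
f1 = 2 * tp / (pred + gold)

print(round(precision, 4), round(recall, 4), round(f1, 4))  # 0.8706 0.8927 0.8815
```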

Model description

LiLT (Language-Independent Layout Transformer) combines a pre-trained text encoder (here the English RoBERTa weights of SCUT-DLVCLab/lilt-roberta-en-base) with a decoupled layout-embedding stream for structured-document understanding. This checkpoint adds a token-classification head for FUNSD-style semantic entity labeling (question, answer, header). Further details were not provided by the author.

Intended uses & limitations

Not documented by the author. Given the entity labels, the model is presumably intended for semantic entity labeling on scanned forms (tagging tokens as question, answer, or header). Note the comparatively weak header class (F1 ≈ 0.54) before relying on it.

Training and evaluation data

Not documented by the author. The entity labels and support counts are consistent with the FUNSD form-understanding benchmark, whose standard split provides 149 training and 50 test forms.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 2500
  • mixed_precision_training: Native AMP
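The fractional epoch values in the results table follow from the step counts. Assuming FUNSD's 149 training forms (the card does not state the dataset size), train_batch_size=8 gives ceil(149/8) = 19 optimizer steps per epoch:

```python
import math

# Assumption: FUNSD's training split has 149 annotated forms; the card
# itself does not state the dataset size.
train_examples = 149
batch_size = 8          # train_batch_size from the list above

steps_per_epoch = math.ceil(train_examples / batch_size)

def epoch_at(step: int) -> float:
    """Epoch counter corresponding to a given optimizer step."""
    return round(step / steps_per_epoch, 4)

print(steps_per_epoch)   # 19
print(epoch_at(200))     # 10.5263, matching the first logged row
print(epoch_at(2500))    # 131.5789 epochs over the full 2500-step run
```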

Training results

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall P | Overall R | Overall F1 | Overall Acc |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.4359 | 10.5263 | 200 | 1.0671 | 0.8490 / 0.8605 / 0.8547 | 0.5862 / 0.5714 / 0.5787 | 0.8636 / 0.9174 / 0.8897 | 0.8424 | 0.8738 | 0.8578 | 0.7904 |
| 0.0471 | 21.0526 | 400 | 1.3296 | 0.8695 / 0.8972 / 0.8831 | 0.5606 / 0.6218 / 0.5896 | 0.9048 / 0.8821 / 0.8933 | 0.8677 | 0.8728 | 0.8702 | 0.8099 |
| 0.0129 | 31.5789 | 600 | 1.5594 | 0.8595 / 0.8837 / 0.8715 | 0.5593 / 0.5546 / 0.5570 | 0.8612 / 0.9276 / 0.8932 | 0.8437 | 0.8877 | 0.8652 | 0.7923 |
| 0.0072 | 42.1053 | 800 | 1.5918 | 0.8317 / 0.9376 / 0.8815 | 0.6591 / 0.4874 / 0.5604 | 0.9074 / 0.9006 / 0.9040 | 0.8633 | 0.8912 | 0.8770 | 0.8019 |
| 0.0040 | 52.6316 | 1000 | 1.5382 | 0.8540 / 0.9021 / 0.8774 | 0.6762 / 0.5966 / 0.6339 | 0.8954 / 0.8979 / 0.8966 | 0.8667 | 0.8818 | 0.8742 | 0.8137 |
| 0.0022 | 63.1579 | 1200 | 1.5363 | 0.8725 / 0.8960 / 0.8841 | 0.6154 / 0.5378 / 0.5740 | 0.8801 / 0.9136 / 0.8966 | 0.8637 | 0.8843 | 0.8738 | 0.7988 |
| 0.0012 | 73.6842 | 1400 | 1.8518 | 0.8718 / 0.9070 / 0.8890 | 0.6602 / 0.5714 / 0.6126 | 0.8941 / 0.9090 / 0.9015 | 0.8730 | 0.8882 | 0.8806 | 0.7926 |
| 0.0015 | 84.2105 | 1600 | 1.7207 | 0.8812 / 0.8898 / 0.8855 | 0.6237 / 0.4874 / 0.5472 | 0.9015 / 0.9090 / 0.9052 | 0.8802 | 0.8763 | 0.8783 | 0.8030 |
| 0.0007 | 94.7368 | 1800 | 1.9117 | 0.8469 / 0.9143 / 0.8793 | 0.6222 / 0.4706 / 0.5359 | 0.8998 / 0.9174 / 0.9085 | 0.8652 | 0.8897 | 0.8773 | 0.7942 |
| 0.0006 | 105.2632 | 2000 | 1.9061 | 0.8622 / 0.9192 / 0.8898 | 0.6585 / 0.4538 / 0.5373 | 0.8929 / 0.9211 / 0.9068 | 0.8706 | 0.8927 | 0.8815 | 0.7959 |
| 0.0002 | 115.7895 | 2200 | 1.8430 | 0.8524 / 0.9192 / 0.8846 | 0.6058 / 0.5294 / 0.5650 | 0.8829 / 0.9239 / 0.9029 | 0.8565 | 0.8987 | 0.8771 | 0.7977 |
| 0.0003 | 126.3158 | 2400 | 1.8246 | 0.8732 / 0.9106 / 0.8916 | 0.6354 / 0.5126 / 0.5674 | 0.8831 / 0.9257 / 0.9039 | 0.8676 | 0.8952 | 0.8812 | 0.7990 |

Support is the same for every row: 817 answer, 119 header, and 1077 question entities.

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
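To approximate this environment, the versions above can be pinned in a requirements file (a sketch; the `+cu121` suffix indicates a CUDA 12.1 build of PyTorch, which is normally selected via the PyTorch package index rather than the version string):

```
# requirements.txt (sketch)
transformers==4.41.1
torch==2.3.0
datasets==2.19.1
tokenizers==0.19.1
```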
Model size

  • 130M params (Safetensors, F32)
Model tree

  • Base model: SCUT-DLVCLab/lilt-roberta-en-base
  • This model: MannR/lilt-en-funsd