Dataset documentation

#1
opened by yjernite (HF staff)

This looks like a super useful resource, it would be great to have a dataset card with it! Here's the relevant doc: https://huggingface.co/docs/hub/datasets-cards

In the meantime linking the Twitter announcement thread from @conceptofmind for context: https://twitter.com/EnricoShippole/status/1766157358672359862

Hi Yacine,

We will be updating all of the datasets with dataset cards and licenses soon.

Just waiting on final comments from the CAP team.

Thank you,

Enrico Shippole

I have updated the dataset with documentation and a license.

@conceptofmind While generating new dense and sparse embeddings for this dataset, I noticed that some of the "text" fields are duplicates. Should I omit the rows with duplicate "text" fields, or are these parallel citations for the same "text" content, in which case they should not be deduplicated?
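For reference, a rough sketch of how I'm spotting the duplicates; the repo id and split below are placeholders, not the exact values from my run:

```python
# Count how often each "text" value occurs; hashing keeps memory reasonable
# for long opinions. The repo id and split are placeholders.
import hashlib
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/courtlistener-opinions", split="train")  # placeholder repo id
counts = Counter(
    hashlib.sha1(t.encode("utf-8")).hexdigest() for t in ds["text"] if t
)
n_dupes = sum(1 for c in counts.values() if c > 1)
print(f"{n_dupes} distinct 'text' values appear more than once")
```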

@endomorphosis

Each opinion has several text fields that will be populated depending on the cluster's source field. For example, scraped opinions tend not to have great text, while those from the Harvard corpus do. The best way to get the text for an opinion is to choose the first populated field from the list below (from best to worst):
html_with_citations is generated by finding citations in the text of the other fields. All items should eventually have this field, though it can be empty initially or if our citation lookup utility fails. In general, this field is used to generate pages on CourtListener.
html_columbia will be populated if we got the content from the Columbia collaboration.
html_lawbox will be populated if we got the content from the Lawbox donation.
xml_harvard will be populated if the source was Harvard's Caselaw Access Project. This field has a lot of data but is not always perfect due to being created by OCR instead of by humans.
html_anon_2020 will be populated if we got the content from our anonymous source in 2020.
html will be populated if we got the opinion from a court's website as a Word Perfect or HTML document, or if we got the opinion from Resource.org, which provided HTML documents.
plain_text will be populated if we got the opinion from a court's website as a PDF or Microsoft Word document.

from https://www.courtlistener.com/help/api/rest/case-law/#opinion-endpoint
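If it helps, here is a minimal sketch of that selection logic, assuming each row is a dict-like record with those columns (empty string or None when unpopulated):

```python
# Priority order from the CourtListener docs above, best to worst.
FIELD_PRIORITY = [
    "html_with_citations",
    "html_columbia",
    "html_lawbox",
    "xml_harvard",
    "html_anon_2020",
    "html",
    "plain_text",
]

def best_text(row: dict) -> str:
    """Return the text from the highest-priority populated field, or '' if none."""
    for field in FIELD_PRIORITY:
        value = row.get(field)
        if value:  # skips None and empty strings
            return value
    return ""
```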

@rbp1

See:
https://huggingface.co/datasets/laion/Caselaw_Access_Project_embeddings/tree/main
Dense embeddings up to 32k tokens are done.

I will hopefully get to finishing and publishing the sparse embeddings by the end of the month.

cool

The issue for me with the CAP database is that it only goes back to 1860 (IIRC). This COLD dataset goes back to the 17th century, which is what I need.

https://huggingface.co/datasets/harvard-lil/cold-cases

I will consider that when I start on the GraphRAG implementation and begin constructing the knowledge graph.

Email me if you'd like to talk about a collab: [email protected]
