import os
from uuid import uuid4

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_chroma import Chroma
from langchain_community.document_loaders import PyPDFLoader
from langchain_core.prompts import ChatPromptTemplate
from langchain_huggingface import HuggingFaceEndpoint, HuggingFaceEmbeddings
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

from setup.environment import default_model

os.environ["LANGCHAIN_TRACING_V2"]="true"
os.environ["LANGCHAIN_ENDPOINT"]="https://api.smith.langchain.com"
os.environ.get("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_PROJECT"]="VELLA"
os.environ.get("OPENAI_API_KEY")
os.environ.get("HUGGINGFACEHUB_API_TOKEN")
embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

allIds = []  # ids assigned to every chunk processed so far

def getPDF(file_paths):
  """Load each PDF, split it into overlapping chunks, and give every chunk a UUID."""
  text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
  pages = []
  for file in file_paths:
    loader = PyPDFLoader(file, extract_images=False)
    pages += loader.load_and_split(text_splitter)

  for page in pages:
    documentId = str(uuid4())
    allIds.append(documentId)
    page.id = documentId
  return pages
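
# Usage sketch (the file names are illustrative only, not from this project):
#   chunks = getPDF(["memorial.pdf", "peticao.pdf"])
#   each element is a ~1000-character Document whose .id is a fresh UUID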

def create_retriever(documents, vectorstore):
  """Index the chunked documents in the vector store and return a retriever over it."""
  vectorstore.add_documents(documents=documents)
  return vectorstore.as_retriever(
      # search_type="similarity",
      # search_kwargs={"k": 3},
  )
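
# Usage sketch (the query text is illustrative only):
#   retriever = create_retriever(chunks, vectorstore)
#   hits = retriever.invoke("key precedents cited in the brief")  # -> list[Document]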

def create_prompt_llm_chain(system_prompt, modelParam):
  """Build a stuff-documents chain; retrieved chunks are injected via {context}."""
  model = create_llm(modelParam)
  prompt = ChatPromptTemplate.from_messages(
      [
          ("system", system_prompt + "\n\n{context}"),
          ("human", "{input}"),
      ]
  )
  return create_stuff_documents_chain(model, prompt)
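
# The returned chain expects {"input": <question>, "context": <list of Documents>}.
# create_retrieval_chain (see the sketch at the end of this file) fills "context"
# from the retriever automatically.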

def create_llm(modelParam):
  """Return ChatOpenAI for the default model, otherwise a Hugging Face endpoint."""
  if modelParam == default_model:
    return ChatOpenAI(model=modelParam)
  return HuggingFaceEndpoint(
      repo_id=modelParam,
      task="text-generation",
      # max_new_tokens=100,
      do_sample=False,  # greedy decoding, for reproducible summaries
      huggingfacehub_api_token=os.environ.get("HUGGINGFACEHUB_API_TOKEN"),
  )


DEFAULT_SYSTEM_PROMPT = """

You are a highly knowledgeable legal assistant specializing in case summarization. Your task is to provide comprehensive and accurate summaries of legal cases while maintaining a professional and objective demeanor. Always approach each case with careful consideration and analytical rigor.

First, you will be given a document to analyze:

Next, you will receive a specific request for summarization:

<summary_request>
{{resuma esse memorial}}
</summary_request>

Before providing your summary, follow these steps:

1. Argumentation Mining: Conduct a cross-document argument analysis to identify the main arguments, claims, and supporting evidence within the document. Focus on extracting the information most relevant to the summary request.

2. Socratic Questioning: Reflect on your initial findings using the Socratic method. Ask yourself probing questions to challenge your assumptions and deepen your understanding of the document's content. For example:
 - What are the key points I've identified?
 - Are there any counterarguments or alternative perspectives I've overlooked?
 - How does this information relate to the specific summary request?
 - What additional context might be necessary to fully understand these points?

3. Maximal Marginal Relevance: Apply the principles of Maximal Marginal Relevance to ensure your summary includes diverse, relevant information while avoiding redundancy. Prioritize information that is both relevant to the summary request and adds new insights not already covered.

After completing these steps, provide your summary in the following format:

<summary>
{
 "nome_do_memorial": "",
 "argumentos": "",
 "jurisprudencia": "",
 "doutrina": "",
 "palavras_chave": [
 ]
}
</summary>

Remember:
- Always prioritize relevance to the summary request.
- Be concise and avoid unnecessary verbosity.
- Ensure your summary is well-structured and easy to understand.
- Do not include any personal opinions or information not present in the original document.
- If the summary request asks for a specific focus or perspective, make sure to address it directly.

Your goal is to provide a comprehensive yet concise summary that accurately represents the document's content while meeting the specific needs outlined in the summary request.
"""