Gonalb committed
Commit bbbcbbb · 1 Parent(s): 4612ed9

complete readme

Files changed (1): README.md (+342, -4)

README.md CHANGED
---
title: DeployPythonicRAG
emoji: πŸ“‰
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: apache-2.0
---

# Deploying Pythonic Chat With Your Text File Application

In today's breakout rooms, we will be following the process that you saw during the challenge.

Today, we will repeat the same process - but powered by the Pythonic RAG implementation we created last week.

You'll notice a few differences in the `app.py` logic - as well as a few changes to the `aimakerspace` package to get things working smoothly with Chainlit.

> NOTE: If you want to run this locally - be sure to use `uv sync`, and then `uv run chainlit run app.py` to start the application outside of Docker.
## Reference Diagram (It's Busy, but it works)

![image](https://i.imgur.com/IaEVZG2.png)

### Anatomy of a Chainlit Application

[Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package, similar to Streamlit, that lets users write the backend and frontend of an application in a single Python file (or several). It is mainly used for prototyping LLM-based chat-style applications - though it is used in production in some settings with millions of Monthly Active Users (MAUs).

The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).

> NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug in" to Chainlit's functionality.
We'll be concerning ourselves with three main scopes, sketched in the snippet below:

1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
3. On message - when the user sends a message through the input text box in the Chainlit UI
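As a rough orientation, here is a minimal sketch of how those three scopes appear in a Chainlit file. The handler bodies are hypothetical, but `@cl.on_chat_start` and `@cl.on_message` are the real Chainlit decorator names:

```python
import chainlit as cl

# 1. On application start: module-level code runs once when
#    `chainlit run app.py` boots the server.
print("Application starting!")

# 2. On chat start: runs once for every new chat session.
@cl.on_chat_start
async def start_chat():
    await cl.Message(content="Welcome! Upload a file to begin.").send()

# 3. On message: runs each time the user sends a message.
@cl.on_message
async def handle_message(message: cl.Message):
    await cl.Message(content=f"You said: {message.content}").send()
```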
Let's dig into each scope and see what we're doing!

### On Application Start:

The first thing you'll notice is that we have the traditional "wall of imports"; this is to ensure we have everything we need to run our application.
```python
import os
from typing import List
from chainlit.types import AskFileResponse
# PDFLoader is used by the process_file helper below; it is assumed to live
# in aimakerspace.text_utils alongside TextFileLoader
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader, PDFLoader
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
```
Next up, we have some prompt templates. As all sessions will use the same prompt templates without modification, and we don't need these templates to be specific per session - we can set them up here, at the application scope.
```python
system_template = """\
Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)

user_prompt_template = """\
Context:
{context}

Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
```
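For intuition, here is a hypothetical invocation. The exact return shape is defined by `aimakerspace`, so treat the commented output as an assumption: filling the placeholders should yield an OpenAI-style message.

```python
formatted = user_role_prompt.create_message(
    question="What is RAG?",
    context="RAG stands for Retrieval Augmented Generation...",
)
# Assumed to resemble an OpenAI chat message, roughly:
# {"role": "user", "content": "Context:\nRAG stands for...\n\nQuestion:\nWhat is RAG?\n"}
```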
> NOTE: You'll notice that these are the exact same prompt templates we used in the Pythonic RAG Notebook in Week 1 Day 2!

Following that - we can create the Python class definition for our RAG pipeline - or *chain*, as we'll refer to it in the rest of this walkthrough.

Let's look at the definition first:
```python
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()

        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)

        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
```
Notice a few things:

1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
2. In essence, our pipeline is *chaining* a few events together:
   1. We take our user query, and chain it into our Vector Database to collect related chunks
   2. We take those contexts and our user's question and chain them into the prompt templates
   3. We take that prompt template and chain it into our LLM call
   4. We chain the response of the LLM call to the user
3. We are using a lot of `async` again!
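To make the streaming behavior concrete, here is a minimal sketch of consuming the dictionary that `arun_pipeline` returns. This is hypothetical standalone usage outside Chainlit, and `pipeline` is assumed to be an already-constructed `RetrievalAugmentedQAPipeline`:

```python
import asyncio

async def main():
    result = await pipeline.arun_pipeline("What is this document about?")

    # "response" holds an async generator; iterating it yields tokens
    # as the LLM produces them, instead of waiting for the full answer.
    async for token in result["response"]:
        print(token, end="", flush=True)

asyncio.run(main())
```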
Now, we're going to create a helper function for processing uploaded text files.

First, we'll instantiate a shared `CharacterTextSplitter`.

```python
text_splitter = CharacterTextSplitter()
```

Now we can define our helper.
```python
def process_file(file: AskFileResponse):
    import tempfile
    import shutil

    print(f"Processing file: {file.name}")

    # Create a temporary file with the correct extension
    suffix = f".{file.name.split('.')[-1]}"
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as temp_file:
        # Copy the uploaded file content to the temporary file
        shutil.copyfile(file.path, temp_file.name)
        print(f"Created temporary file at: {temp_file.name}")

    # Create the appropriate loader
    if file.name.lower().endswith('.pdf'):
        loader = PDFLoader(temp_file.name)
    else:
        loader = TextFileLoader(temp_file.name)

    try:
        # Load and process the documents
        documents = loader.load_documents()
        texts = text_splitter.split_texts(documents)
        return texts
    finally:
        # Clean up the temporary file
        try:
            os.unlink(temp_file.name)
        except Exception as e:
            print(f"Error cleaning up temporary file: {e}")
```
Simply put, this copies the upload to a temporary file, loads it with `TextFileLoader` (or `PDFLoader` for PDFs), splits it with our `text_splitter`, and returns the resulting list of strings!

#### ❓ QUESTION #1:

- Why do we want to support streaming? What about streaming is important, or useful?
- Because it improves the user experience. Streaming allows for immediate feedback, reduces perceived latency, and mimics natural human conversation.
### On Chat Start:

The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.

You'll see that our code is set up to immediately show the user a chat box requesting them to upload a file.
```python
files = None

# Keep prompting until the user actually uploads a file
while files is None:
    files = await cl.AskFileMessage(
        content="Please upload a Text or PDF file to begin!",
        accept=["text/plain", "application/pdf"],
        max_size_mb=2,
        timeout=180,
    ).send()
```
Once we've obtained the text file - we'll use our processing helper function to process our text!
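That step is a single call - a small sketch, with variable names assumed from the surrounding excerpts:

```python
file = files[0]  # AskFileMessage returns a list of uploads; take the first
texts = process_file(file)
```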
After we have processed our text file - we'll need to create a `VectorDatabase` and populate it with our processed chunks and their related embeddings!
```python
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
```

Once we have that piece completed - we can create the chain we'll be using to respond to user queries!
```python
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai,  # a ChatOpenAI instance created earlier in app.py
)
```
Now, we'll save that into our user session!

> NOTE: Chainlit has some great documentation about [User Session](https://docs.chainlit.io/concepts/user-session).
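That save is a one-liner; the `"chain"` key below is not guessed - it matches the `cl.user_session.get("chain")` call shown in the On Message section:

```python
cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
```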
#### ❓ QUESTION #2:

- Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?
- Because User Session provides a way to isolate the data for each user. Each session is tied to a specific user or chat instance, so their context is independent of others. The primary reason for using a User Session instead of a global variable is to ensure that each user's interactions with the chat application are isolated, avoiding interference between multiple users.
### On Message

First, we load our chain from the user session:

```python
chain = cl.user_session.get("chain")
```

Then, we run the chain on the content of the message - and stream it to the front end - that's it!

```python
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)

async for stream_resp in result["response"]:
    await msg.stream_token(stream_resp)
```
### πŸŽ‰

With that - you've turned our Pythonic RAG notebook into a full Chainlit application!
## Deploying the Application to Hugging Face Space

Due to the way the repository is created - it should be straightforward to deploy this to a Hugging Face Space!

> NOTE: If you wish to go through the local deployments using `uv run chainlit run app.py` and Docker - please feel free to do so!
<details>
<summary>Creating a Hugging Face Space</summary>

1. Navigate to the `Spaces` tab.

![image](https://i.imgur.com/aSMlX2T.png)

2. Click on `Create new Space`

![image](https://i.imgur.com/YaSSy5p.png)

3. Create the Space by providing values in the form. Make sure you've selected "Docker" as your Space SDK.

![image](https://i.imgur.com/6h9CgH6.png)

</details>
<details>
<summary>Adding this Repository to the Newly Created Space</summary>

1. Collect the SSH address from the newly created Space.

![image](https://i.imgur.com/Oag0m8E.png)

> NOTE: The address is the component that starts with `git@hf.co:spaces/`.

2. Use the command:

```bash
git remote add hf HF_SPACE_SSH_ADDRESS_HERE
```

3. Use the command:

```bash
git pull hf main --no-rebase --allow-unrelated-histories -X ours
```

4. Use the command:

```bash
git add .
```

5. Use the command:

```bash
git commit -m "Deploying Pythonic RAG"
```

6. Use the command:

```bash
git push hf main
```

7. The Space should automatically build as soon as the push is completed!

> NOTE: The build will fail before you complete the following steps!

</details>
<details>
<summary>Adding OpenAI Secrets to the Space</summary>

1. Navigate to your Space settings.

![image](https://i.imgur.com/zh0a2By.png)

2. Navigate to `Variables and secrets` on the Settings page and click `New secret`:

![image](https://i.imgur.com/g2KlZdz.png)

3. In the `Name` field, input `OPENAI_API_KEY`; in the `Value (private)` field, put your OpenAI API Key.

![image](https://i.imgur.com/eFcZ8U3.png)

4. The Space will begin rebuilding!

</details>
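Inside the Space, secrets are exposed to the container as environment variables. As a quick sanity check - a minimal sketch, assuming the standard secret-to-environment behavior - your application code can read the key like so:

```python
import os

# The Space injects the secret as an environment variable at runtime
openai_api_key = os.environ.get("OPENAI_API_KEY")
if openai_api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set - add it as a Space secret.")
```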
325
+ ## πŸŽ‰
326
+
327
+ You just deployed Pythonic RAG!
328
+
329
+ Try uploading a text file and asking some questions!
330
+
#### ❓ Discussion Question #1:

Upload a PDF file of the recent DeepSeek-R1 paper and ask the following questions:

1. What is RL and how does it help reasoning?
2. What is the difference between DeepSeek-R1 and DeepSeek-R1-Zero?
3. What is this paper about?

Does this application pass your vibe check? Are there any immediate pitfalls you're noticing?
It does not pass the vibe check for me. The limitation I found is that the app fails on general questions like the last one.
## 🚧 CHALLENGE MODE 🚧

For the challenge mode, please instead create a simple FastAPI backend with a simple React (or any other JS framework) frontend.

You can use the same prompt templates and RAG pipeline as we did here - but you'll need to modify the code to work with FastAPI and React.

Deploy this application to Hugging Face Spaces!
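To get you started, here is a minimal sketch of what the FastAPI side could look like. The endpoint path, request model, and module-level `pipeline` are hypothetical; only the reuse of `arun_pipeline`'s streaming generator comes from this walkthrough:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    question: str

@app.post("/query")
async def query(request: QueryRequest):
    # `pipeline` is assumed to be a RetrievalAugmentedQAPipeline built at startup
    result = await pipeline.arun_pipeline(request.question)

    # Stream tokens to the frontend as they arrive from the LLM
    return StreamingResponse(result["response"], media_type="text/plain")
```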