Commit 9a78d2a: add openai files
Committed Oct 1, 2023 (parent 0c694e5)

4 files changed: +212, -0 lines
ai/openai/introduction/README.md
# Introduction to OpenAI

## Overview

What is [OpenAI](https://openai.com/)?

* A research company focused on AI development
* Builds and provides models
* Builds and provides a standard API for using AI

What is a model?

You can think of a model as a language "super database".<br>
Instead of writing a query against a traditional database like SQL, which can be slow, you can throw a question at a model and it gives you an answer really fast.

Model examples:

* GPT 3.5
* GPT 4
## Getting started

The best way to get started and understand OpenAI is to learn hands-on.

* Create an OpenAI account [here](https://openai.com/)

## ChatGPT

Here you can find the link to [ChatGPT](https://chat.openai.com/)

## OpenAI Playground

Here you can find the link to the [OpenAI Playground](https://platform.openai.com/playground)
34+
## Build an AI powered app

We can start with a `main.py` that reads a message from the command line:

```python
import sys

# the script name is at index 0, so the user's message is the first argument
message = sys.argv[1]
```
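As a quick sanity check, the argument indexing above can be sketched as a small helper (an illustrative sketch, not part of the app; `read_message` is a hypothetical name):

```python
import sys

def read_message(argv):
    # argv[0] is the script path; the user's message is the first real argument
    if len(argv) < 2:
        raise SystemExit('usage: main.py "<message>"')
    return argv[1]

# e.g. running `python main.py "hello"` makes sys.argv == ["main.py", "hello"],
# so read_message(sys.argv) returns "hello"
```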
Then we will need the code from the OpenAI Playground and add it to our `main.py`.<br>
Move the `import` statements to the top.

Once you have tidied everything up, you can get the response message from the AI:

```python
responseMessage = response.choices[0].message.content
```
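The code the Playground exports looks roughly like the sketch below, matching the `openai==0.28.0` client pinned later in this repo (a hedged sketch; `build_request` is a hypothetical helper added here for clarity):

```python
def build_request(message):
    # shape of the request the Playground's "View code" export produces
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": message}],
        "temperature": 1,
        "max_tokens": 256,
    }

# With a valid key set, the live call (requires network access) would be:
#
#   import os, openai
#   openai.api_key = os.getenv("OPENAI_API_KEY")
#   response = openai.ChatCompletion.create(**build_request("hello"))
#   print(response.choices[0].message.content)
```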

Let's build our app:

```shell
cd ai\openai\introduction
docker build . -t ai-app
```

Set your OpenAI API key (PowerShell syntax):

```shell
$ENV:OPENAI_API_KEY=""
```
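The `$ENV:` syntax above is PowerShell; on bash or zsh the equivalent would be a plain `export` (same empty placeholder value, fill in your own key):

```shell
# bash/zsh equivalent of the PowerShell $ENV: assignment above
export OPENAI_API_KEY=""
```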

Run our AI app (the script expects the message as its first argument):

```shell
docker run -it -e OPENAI_API_KEY=$ENV:OPENAI_API_KEY ai-app "your message here"
```

When we run the app, notice it has no concept of memory.<br>
The playground works because it keeps track of all the user and AI messages and keeps appending new messages to the list, so it can track the conversation.
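The idea the playground uses can be sketched as a list that grows with each turn (a simplified illustration, not the app's actual code; `add_turn` is a hypothetical helper):

```python
# Each turn appends to a shared history, so the model always sees full context
history = [{"role": "system", "content": "You are a helpful assistant"}]

def add_turn(history, user_text, assistant_text):
    # append the user's message and the model's reply to the running history
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

add_turn(history, "can you help me?", "Sure, what do you need?")
add_turn(history, "my pod is failing", "Which namespace is it in?")
print(len(history))  # system message + 2 turns x 2 messages = 5
```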

Let's keep track of messages by writing them to a local file.<br>
We will also take the system message out and keep it as a constant in our code.

Full example:

```python
import sys
import os
import json
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# read the incoming message
message = sys.argv[1]
user_message = {
    "role": "user",
    "content": message
}

systemMessage = {
    "role": "system",
    "content": "You are a kubernetes expert that can assist developers with troubleshooting deployments\n\nTo help the developer you will need to know the namespaces as well as the pod name. Ask for missing information\n\nGenerate a command to help the developer surface logs or information\n"
}

# read the cached user messages if there are any
userMessages = []
if os.path.isfile("messages.json"):
    with open('messages.json', newline='') as messagesFile:
        data = messagesFile.read()
        userMessages = json.loads(data)

# add the new message to it and update the cached messages
userMessages.append(user_message)
with open('messages.json', 'w', newline='') as messagesFile:
    msgJSON = json.dumps(userMessages)
    messagesFile.write(msgJSON)
    print(msgJSON)

messages = []
messages.append(systemMessage)
messages.extend(userMessages)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

responseMessage = response.choices[0].message.content
print(responseMessage)
```
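After a couple of runs, `messages.json` holds the accumulated user messages (note the app only caches user messages, not the AI's replies). A sketch of the round-trip with illustrative contents, not captured from a real run:

```python
import json

# illustrative contents of messages.json after two runs of the app
cached = [
    {"role": "user", "content": "can you help me with my deployment?"},
    {"role": "user", "content": "my pod is pod-123"},
]

# the app serializes with json.dumps and reloads with json.loads on the next run
serialized = json.dumps(cached)
assert json.loads(serialized) == cached
print(serialized)
```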

Now we can mount a volume so we persist the cache of messages:

```shell
docker run -it -e OPENAI_API_KEY=$ENV:OPENAI_API_KEY -v ${PWD}:/app ai-app "can you help me with my deployment?"
Of course! I'd be happy to help with your deployment. Could you please provide me with the namespace and the name of the pod you're encountering issues with?

docker run -it -e OPENAI_API_KEY=$ENV:OPENAI_API_KEY -v ${PWD}:/app ai-app "my pod is pod-123"
Sure, I can help you with your deployment. Can you please provide me with the namespace in which the pod is running?

docker run -it -e OPENAI_API_KEY=$ENV:OPENAI_API_KEY -v ${PWD}:/app ai-app "its in the products namespace"
Great! To surface the logs for the pod "pod-123" in the "products" namespace, you can use the following command:

    kubectl logs -n products pod-123

This command will retrieve the logs for the specified pod in the given namespace. Make sure you have the necessary permissions to access the namespace.
```

ai/openai/introduction/dockerfile
```dockerfile
FROM python:3.11-alpine

RUN mkdir /app
WORKDIR /app

COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

COPY main.py /app/

ENTRYPOINT ["python3", "main.py"]
```

ai/openai/introduction/main.py
```python
import sys
import os
import json
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# read the incoming message
message = sys.argv[1]
user_message = {
    "role": "user",
    "content": message
}

systemMessage = {
    "role": "system",
    "content": "You are a kubernetes expert that can assist developers with troubleshooting deployments\n\nTo help the developer you will need to know the namespaces as well as the pod name. Ask for missing information\n\nGenerate a command to help the developer surface logs or information\n"
}

# read the cached user messages if there are any
userMessages = []
if os.path.isfile("messages.json"):
    with open('messages.json', newline='') as messagesFile:
        data = messagesFile.read()
        userMessages = json.loads(data)

# add the new message to it and update the cached messages
userMessages.append(user_message)
with open('messages.json', 'w', newline='') as messagesFile:
    msgJSON = json.dumps(userMessages)
    messagesFile.write(msgJSON)
    print(msgJSON)

messages = []
messages.append(systemMessage)
messages.extend(userMessages)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

responseMessage = response.choices[0].message.content
print(responseMessage)
```
ai/openai/introduction/requirements.txt

```
openai==0.28.0
```
