I recently put together a workshop called StarWarsCopilot, and I wanted to turn it into a proper blog series so you can follow along at your own pace.
This is lesson 1 of 8.
In this first step we build the foundation: a .NET console app that can send prompts to an LLM and print responses back to the terminal.
Before you start (self-setup)
If you’re following the series on your own, start with lesson 0.
For this lesson specifically, you need:
- an Azure subscription with permission to create AI resources
- Azure OpenAI access in a supported region
- a deployed chat model (for example `gpt-5-mini`)
- .NET 10 SDK installed locally
Setup notes for this lesson
- In the Azure portal, create an Azure OpenAI resource.
- Open the resource and launch Azure AI Foundry for that resource.
- Deploy a chat model (for example `gpt-5-mini`).
- Copy:
  - your Azure OpenAI endpoint (for example `https://<your-resource>.openai.azure.com`)
  - an API key
  - your deployment name (used as the model name in this lesson)
Then configure your app secrets:
```shell
dotnet user-secrets init
dotnet user-secrets set "OpenAI:Endpoint" "https://<your-resource>.openai.azure.com"
dotnet user-secrets set "OpenAI:APIKey" "<your-api-key>"
dotnet user-secrets set "OpenAI:ModelName" "<your-chat-deployment-name>"
```
Quick verification:
```shell
dotnet user-secrets list
```
You should see the three OpenAI:* keys before moving on.
What we’re building
The app is intentionally simple:
- Read user input from the console
- Send it to a model via `IChatClient`
- Print the assistant response
- Repeat until the user exits
Under the hood this uses Microsoft.Extensions.AI, which gives us a clean abstraction over different model providers.
Project setup
The workshop starts by scaffolding a console app and installing packages for:
- LLM access (`Microsoft.Extensions.AI`, `Microsoft.Extensions.AI.OpenAI`)
- Azure OpenAI connectivity (`Azure.AI.OpenAI`)
- Config and secrets (`Microsoft.Extensions.Configuration.UserSecrets`)
- Logging (`Microsoft.Extensions.Logging.Console`)
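For reference, the scaffolding and package installs might look roughly like this (the project name is illustrative, and depending on the current release status of `Microsoft.Extensions.AI.OpenAI` you may or may not need the `--prerelease` flag):

```shell
# Scaffold a new console app (project name is just an example)
dotnet new console -n StarWarsCopilot
cd StarWarsCopilot

# LLM abstraction and its OpenAI implementation
dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease

# Azure OpenAI connectivity
dotnet add package Azure.AI.OpenAI

# Config/secrets and console logging
dotnet add package Microsoft.Extensions.Configuration.UserSecrets
dotnet add package Microsoft.Extensions.Logging.Console
```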
Then we store model details in user secrets:
```shell
dotnet user-secrets set "OpenAI:Endpoint" "..."
dotnet user-secrets set "OpenAI:APIKey" "..."
dotnet user-secrets set "OpenAI:ModelName" "gpt-5-mini"
```
I like this pattern because it keeps credentials out of source and makes swapping models easy later.
Connecting to the model
The key flow is:
- Load endpoint/key/model from configuration
- Create an `AzureOpenAIClient`
- Convert it to an `IChatClient`
- Wrap it with logging middleware
- Call `GetResponseAsync`
That “convert to IChatClient” part is the important design choice. It means we can switch providers later without rewriting the entire app loop.
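Put together, the wiring might look roughly like this. This is a sketch, not the workshop's exact code: the config key names match the user-secrets commands above, `Program` is the implicit entry class of a top-level-statements console app, and `AsIChatClient()` is the extension name in recent `Microsoft.Extensions.AI.OpenAI` versions (earlier previews called it `AsChatClient()`):

```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

// Load endpoint/key/model from user secrets
var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

var endpoint = config["OpenAI:Endpoint"]!;
var apiKey = config["OpenAI:APIKey"]!;
var modelName = config["OpenAI:ModelName"]!;

// Azure OpenAI client, converted to the provider-agnostic IChatClient
IChatClient innerClient = new AzureOpenAIClient(
        new Uri(endpoint),
        new ApiKeyCredential(apiKey))
    .GetChatClient(modelName)
    .AsIChatClient();

// Wrap with logging middleware so every request/response is traced
using var loggerFactory = LoggerFactory.Create(b => b.AddConsole());

IChatClient chatClient = new ChatClientBuilder(innerClient)
    .UseLogging(loggerFactory)
    .Build();
```

Because the rest of the app only ever sees `IChatClient`, swapping Azure OpenAI for another provider later means changing just this construction block.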
First interactive loop
Once wired up, the app is basically:
```csharp
while (true)
{
    Console.Write("User > ");
    var userInput = Console.ReadLine();

    if (string.IsNullOrWhiteSpace(userInput))
        break;

    var result = await chatClient.GetResponseAsync(userInput);
    Console.WriteLine("Assistant > " + result.Text);
}
```
This gets us a working copilot quickly, with great trace visibility from logging.
What you should notice
If you ask:
- “What is the best Star Wars movie?”
- “What is the worst?”
…the model answers the second question with no memory of the first.
That’s expected: LLMs are stateless unless you send the prior messages along with each request.
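As a preview of the fix, the trick is to send the accumulated conversation rather than a bare string; a minimal sketch, assuming the same `chatClient` as above:

```csharp
using Microsoft.Extensions.AI;

// Keep the whole conversation and send it on every turn
List<ChatMessage> history = [];

history.Add(new ChatMessage(ChatRole.User, "What is the best Star Wars movie?"));
var first = await chatClient.GetResponseAsync(history);
history.AddRange(first.Messages);

// Now "the worst" can be resolved against the earlier question
history.Add(new ChatMessage(ChatRole.User, "What is the worst?"));
var second = await chatClient.GetResponseAsync(history);
```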
And that’s exactly where lesson 2 goes next.
Suggested banner prompt
A cinematic, retro-futuristic illustration of a developer at a terminal in a starship cockpit, chatting with a glowing AI hologram. Neon blues and warm amber instrument lights, dramatic depth, hopeful mood, high detail, wide composition, no text, no logos.
Follow along
Workshop source for this lesson: Lesson 1 README.
Next up: chat history, message roles, and turning this into a true conversation.
Note: Original workshop repository: jimbobbennett/StarWarsCopilot.
