ChatCrafters AI Suite - Documentation

Overview

The “ChatCrafters AI Suite” is a .NET library designed to simplify the integration of the OpenAI and ElevenLabs APIs into .NET applications. The OpenAI APIs power chat and more, while the ElevenLabs APIs provide excellent text-to-speech services that give AI chat characters unique and realistic-sounding voices.

While the library aims to support all OpenAI APIs, the current version focuses on providing a seamless experience for incorporating the OpenAI Chat API.

Features

  • Chat Integration: Easily add conversational AI to your applications with support for OpenAI’s Chat API.
  • .NET Compatibility: Developed for the .NET ecosystem, ensuring seamless integration with your existing .NET projects.
  • Extensive Documentation: Full documentation and complete code examples will be available on the website.
  • Simplicity: Designed with simplicity in mind, enabling developers of all skill levels to leverage AI in their applications.
  • Future Support: Planned expansions to include full support for all OpenAI APIs.

Supported OpenAI Chat Models

The “ChatCrafters AI Suite” supports all of the Chat models currently supplied by the OpenAI API. The list below contains eight model names but only five unique models. The names “GPT4 Turbo Preview”, “GPT4”, and “GPT3.5 Turbo” are aliases that OpenAI continuously updates to point at the newest stable edition of each base model. Developers looking for stability should use these alias names in code, while developers with more specific needs might want to select one of the dated models. A short sketch showing how to select a model in code follows the list below.

  • GPT4 0125 Preview
  • GPT4 Turbo Preview – Currently points to GPT4 0125 Preview per OpenAI
  • GPT4 1106 Preview
  • GPT4 – Currently points to GPT4 0613 per OpenAI
  • GPT4 0613
  • GPT3.5 Turbo 0125
  • GPT3.5 Turbo – Currently points to GPT3.5 Turbo 0125 per OpenAI
  • GPT3.5 Turbo 1106
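
As a quick illustration of how these model names map to code, here is a minimal sketch that resolves endpoint names for both an alias model and a dated model. It assumes the same “using ChatCrafters_AI_Suite.Chat;” as the examples below; “ChatModel.GPT3_5_Turbo”, “ChatModel.GPT4_Turbo_Preview”, and “GetModelProperties” appear later in this document, while “ChatModel.GPT4_0613” is an assumed enum member mirroring the “GPT4 0613” entry above:

// Resolve the endpoint name for an alias model that OpenAI keeps pointed at the newest stable edition.
string stableModel = ChatModels.ModelInfo
    .GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo)
    .EndpointName;

// Resolve the endpoint name for a pinned, dated model.
// ChatModel.GPT4_0613 is an assumed enum member mirroring the "GPT4 0613" entry in the list above.
string pinnedModel = ChatModels.ModelInfo
    .GetModelProperties(ChatModels.ChatModel.GPT4_0613)
    .EndpointName;

Console.WriteLine($"Alias model endpoint:  {stableModel}");
Console.WriteLine($"Pinned model endpoint: {pinnedModel}");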

Prerequisites

  • .NET 8.0
  • OpenAI API Key
  • OpenAI OrgID (optional)

Installation

Install the ChatCrafters AI Suite via the NuGet Package Manager:

Install-Package ChatCrafters.ChatCrafters_AI_Suite -Version 1.0.0

Or via the .NET CLI:

dotnet add package ChatCrafters.ChatCrafters_AI_Suite --version 1.0.0

C# Bare Bones Example

Here’s a stripped-down, bare-bones example of using the “ChatCrafters AI Suite” Chat Client:

using ChatCrafters_AI_Suite.Chat;

namespace ChatCrafters_Chat_BareBonesExample
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Replace "your_api_key_here" with your actual OpenAI API key
            var apiKey = "your_openai_api_key_here";
            var orgID = ""; // Optional: Enter your OpenAI Organization ID if you have one

            var chatClient = new ChatClient(apiKey, orgID);
            List<MessageBase> messages = new List<MessageBase>();

            string userAsk = "When was Star Wars: Ep 4 released?";
            Console.WriteLine($"User: {userAsk}");
            messages.Add(new UserMessage(userAsk, ChatMessageRoleType.User));

            var chatRequest = new ChatRequest
            {
                Messages = messages,
                Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName
            };

            var (chatCompletion, finishReason) = chatClient.GetChatResponseAsync(chatRequest).GetAwaiter().GetResult();
            Console.WriteLine(chatCompletion?.Choices?.Last().Message?.Content);

            HitAnyKeyToContinue();
        }

        static void HitAnyKeyToContinue()
        {
            // Clear the input buffer, if any
            while (Console.KeyAvailable)
            {
                Console.ReadKey(true); // true = hide input
            }

            // Show prompt
            Console.WriteLine();
            Console.Write("Hit any key to continue...");

            // Wait for any input
            Console.ReadKey(true);
        }
    }
}

Typical Output

Star Wars: Episode IV - A New Hope was released on May 25, 1977.

Bare Bones Notes

This “Bare Bones Example” is provided for those of you who only want to see exactly what’s needed and nothing else. While I tried to keep the code as short as possible, I included the full “namespace” and “Main” blocks to ensure that junior developers see everything.

 

In this example, just inside “Main”, you first need to save the OpenAI API Key (and optional OrgID) to the variables. We then initialize a Chat Client and a List for chat messages. The List of messages holds the conversation history as the chat occurs, and this List is sent along with each request to the chat model.

 

Then we define the “user ask”. This is a generic term I’ve seen used for AI input, so I’ve continued its use here. Of course, this would be gathered from the actual user with the UI in a real program. The “user ask” value is then printed to the UI and added to the conversation via the List of messages.

 

A Chat Request is then created with this conversation history List and a specific chat model. Note that the model used in this example is “GPT3.5 Turbo”, chosen here to maximize speed. The ChatCrafters AI Suite supports the five unique chat models currently offered by OpenAI through their API.

 

At this point the Chat Request object is submitted to the Chat Client using the basic “bulk response” method, “GetChatResponseAsync”. This method returns two separate values. The first is a Chat Completion object that holds a lot of data about the response. The second, “finishReason”, is a string containing the reason why the model stopped responding. The usual value is “stop”, which means the response came to a natural end. Other values for “finishReason” indicate things like the response running out of space in the Context Window (hitting the token limit) or being cut short by a content filter. More on these later.

Side Note: .GetAwaiter().GetResult()

In this example, the get response method “GetChatResponseAsync” is called with the additional methods “.GetAwaiter().GetResult()”. This is necessary because the enclosing method (“Main” in this case) is not declared “async”. Using them here allows us to synchronously wait for the asynchronous method. This is fine in a small example like this, but blocking on asynchronous code is not always a good idea; in UI applications it can cause deadlocks.
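
If you would rather not block at all, a simple alternative is to declare “Main” as async and await the call. Here is a minimal sketch of the “Bare Bones Example” reworked that way, using only the types already shown above:

using ChatCrafters_AI_Suite.Chat;

namespace ChatCrafters_Chat_AsyncExample
{
    internal class Program
    {
        // Declaring Main as async Task lets us await the Chat Client directly instead of blocking.
        static async Task Main(string[] args)
        {
            var apiKey = "your_openai_api_key_here";
            var orgID = ""; // Optional

            var chatClient = new ChatClient(apiKey, orgID);

            var chatRequest = new ChatRequest
            {
                Messages = new List<MessageBase>
                {
                    new UserMessage("When was Star Wars: Ep 4 released?", ChatMessageRoleType.User)
                },
                Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName
            };

            // Await instead of .GetAwaiter().GetResult(); no thread is blocked while waiting.
            var (chatCompletion, finishReason) = await chatClient.GetChatResponseAsync(chatRequest);
            Console.WriteLine(chatCompletion?.Choices?.Last().Message?.Content);
            Console.WriteLine($"Finish Reason: {finishReason}");
        }
    }
}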

In this “Bare Bones Example”, I allowed the code to assume that the Chat Completion, Choices, and Message are not null (more on these later). The code simply prints the “Content” directly to the UI.

 

Lastly, I went ahead and included some simple code I’ve used for years that prints “Hit any key to continue…” to the UI and waits for any key press. This especially helps beginners see the program because Visual Studio’s default settings will auto close the window the moment the program exits.

C# Example with Comments

Here’s a more complete example to get you started with the “ChatCrafters AI Suite” Chat Client:

using ChatCrafters_AI_Suite.Chat;

namespace ChatCrafters_Chat_BareBonesExample
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Replace "your_api_key_here" with your actual OpenAI API key
            var apiKey = "your_openai_api_key_here";
            var orgID = ""; // Optional: Enter your OpenAI Organization ID if you have one

            Console.WriteLine("Chat Test\n");

            // Initialize a new ChatCrafters AI Suite ChatClient
            ChatClient chatClient = new ChatClient(apiKey, orgID);

            // Start a new list to hold the conversation (Message Type can be "System", "User", or "Assistant")
            List<MessageBase> messages = new List<MessageBase>();

            // Set a user question for the AI model
            string userAsk = "I just got a black and white cat. Please give me 5 cute name ideas for my new cow-cat.";

            // Display User Ask to the UI
            Console.WriteLine($"User: {userAsk}");

            // Add the user ask to the conversation history
            messages.Add(new UserMessage(userAsk, ChatMessageRoleType.User));

            // Create a new ChatCrafters AI Suite ChatRequest object
            ChatRequest chatRequest = new ChatRequest
            {
                // Add the entire conversation history to the request
                Messages = messages,

                // Use GetModelProperties to get the proper End Point Name (GPT4 Turbo Preview is used here)
                Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT4_Turbo_Preview).EndpointName,

                // Set the level of creativity (0.0 - 2.0) (0.8 seems to be a good default)
                Temperature = 0.8
            };

            // Minimal error catching
            try
            {
                // Submit the chat request to the client
                (ChatCompletion? chatCompletion, string finishReason) = chatClient.GetChatResponseAsync(chatRequest).GetAwaiter().GetResult();

                // Check for content
                if (chatCompletion != null && chatCompletion.Choices != null)
                {
                    // There should be only one Choice, but this ensures all AI output is displayed
                    foreach (Choice choice in chatCompletion.Choices)
                    {
                        // Show the AI response text
                        Console.WriteLine($"Response: {choice.Message?.Content}");

                        // Show 'Finish Reason' for this example; this field is not normally shown to users
                        Console.WriteLine($"Finish Reason: {finishReason}");
                    }
                }
                else
                {
                    // Tell the user
                    Console.WriteLine("No response received.");
                }
            }
            catch (Exception ex)
            {
                // Tell the user
                Console.WriteLine($"Error: {ex.Message}");
            }

            // All done
            Console.WriteLine("\nTest Complete\n");

            HitAnyKeyToContinue();
        }
        
        static void HitAnyKeyToContinue()
        {
            // Clear the input buffer, if any
            while (Console.KeyAvailable)
            {
                Console.ReadKey(true); // true = hide input
            }

            // Show prompt
            Console.WriteLine();
            Console.Write("Hit any key to continue...");

            // Wait for any input
            Console.ReadKey(true);
        }
    }
}

Typical Output

Response: Congratulations on your new feline friend! Here are 5 cute name ideas inspired by their unique cow-like appearance:

1. **Oreo** - Just like the classic black and white cookie, this name is perfect for a black and white cat.
2. **Moo** - A playful and adorable nod to the sounds a cow makes, highlighting your cat's cow-like spots.
3. **Panda** - Named after the beloved black and white bear, this name is ideal for a cat with distinct black and white patches.
4. **Domino** - This name suits a black and white cat wonderfully, reminiscent of the pattern of a domino piece.
5. **Patches** - This name is cute and fitting for a cat with a patchy, cow-like fur pattern.

No matter which name you choose, it's sure to be a perfect match for your new companion's charming appearance!
Finish Reason: stop

Example with Comments Notes

This example demonstrates the same basic idea as the “Bare Bones Example” while being more verbose with everything. In this example, I tried to explicitly state each data and element type, and comment each step, to fully illustrate everything that’s happening. In addition, I included the full “namespace” and “Main” blocks to ensure that junior developers see absolutely everything.

 

Just like the first example, just inside “Main”, you first need to save the OpenAI API Key (and optional OrgID) to the variables. That is followed by a quick update to the UI for this example. Then we initialize a Chat Client and a List for chat messages. The List of messages will hold the conversation history as the chat occurs, and this List is sent along with each request to the chat model. This is how the AI can maintain a conversation and build upon it as it goes along. Because the OpenAI Chat models are “stateless”, it is up to us programmers to maintain the conversation in this way.

Side Note: Stateless

This means that each request to the chat models is treated as its own unique thing that has nothing to do with any previous request. If the programmer does not take the responsibility to maintain a “conversation history” and provide it with each new request, then a request to the model is the same as a random stranger walking up to someone and stating a single phrase with little meaning and no context.

 

Depending on the needs of the program, this can be leveraged either way. In other words, if you are developing a typical “chat program”, then the conversation history can easily be given to the chat model. However, if your program is doing something like generating summaries of information (like dense research information for example), you could feed the same instructions to the chat model with new research content again and again without giving it any “conversation history”. This would force it to treat each request for summary as its own unique request, and the model will have no chance to “become lazy” or “change topics” as the conversation continues.
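
To make the “conversation history” idea concrete, here is a rough sketch of a multi-turn loop that keeps the history itself. It assumes it runs inside an async method with a ChatClient named “chatClient” already constructed, and it assumes an “AssistantMessage” class and “ChatMessageRoleType.Assistant” value that mirror the “UserMessage” / “ChatMessageRoleType.User” pattern from the examples:

// A rough sketch of a multi-turn chat loop that maintains the conversation history itself.
// Assumes a ChatClient named "chatClient" already exists and that this code runs in an async method.
// AssistantMessage / ChatMessageRoleType.Assistant are assumed to mirror the UserMessage pattern.
List<MessageBase> messages = new List<MessageBase>();

while (true)
{
    Console.Write("User: ");
    string? userAsk = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(userAsk)) break; // Empty input ends the chat

    // Every user turn is added so the stateless model can "remember" it on the next request
    messages.Add(new UserMessage(userAsk, ChatMessageRoleType.User));

    var chatRequest = new ChatRequest
    {
        Messages = messages,
        Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName
    };

    var (chatCompletion, _) = await chatClient.GetChatResponseAsync(chatRequest);
    string? reply = chatCompletion?.Choices?.Last().Message?.Content;
    Console.WriteLine($"Assistant: {reply}");

    // The model's own reply must also be added, or the next request will not include it
    if (reply != null)
        messages.Add(new AssistantMessage(reply, ChatMessageRoleType.Assistant));
}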

Example with Comments Notes continued...

The List of messages is typed as “MessageBase” because all message types in the “ChatCrafters AI Suite” derive from this object. The message types are: “Assistant” for messages from the chat model, “User” for messages from the person using the program, “System” for messages that are inserted by the software for the model, and “Tool” for messages that handle function calls. Remember that anything not added to the List of messages will be completely unknown to the chat model.
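
As an illustration of the different message types, the sketch below seeds a conversation with a “System” instruction before the first user turn. The “SystemMessage” class and “ChatMessageRoleType.System” value are assumptions that mirror the “UserMessage” / “ChatMessageRoleType.User” pattern used in the examples:

// Seed the conversation with a "System" instruction before any user input.
// SystemMessage / ChatMessageRoleType.System are assumed to mirror the UserMessage pattern above.
List<MessageBase> messages = new List<MessageBase>();

// The "System" message sets the character or ground rules for the chat model
messages.Add(new SystemMessage(
    "You are a cheerful pirate named Salty Pete. Always stay in character.",
    ChatMessageRoleType.System));

// The "User" message carries the person's actual input, exactly as in the examples
messages.Add(new UserMessage("Tell me about your ship.", ChatMessageRoleType.User));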

 

As a last note about the List of messages, this “conversation history” also becomes important when the conversation gets long. Be aware that the different chat models offer different sizes of “Context Window”. The Context Window is the grand total of tokens allowed for a request: the tokens taken by the request itself, the tokens used by any tools (functions and their descriptions), the tokens used by the model’s response, and the tokens used by the entire submitted conversation history. This grand total can be reached in any number of ways, including a huge request with a small response, a small request with a huge response, or even a small request with a small response but a huge conversation history. It is up to the programmer to maintain the conversation history itself, as well as its length, to ensure everything stays within the “Context Window” for every request.
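
A very rough sketch of one way to handle this (not part of the “ChatCrafters AI Suite” itself) is to cap the number of messages kept in the history; a real application would estimate tokens rather than count messages:

// Keep the conversation history from growing past a fixed cap so requests stay inside the Context Window.
// A real application would estimate tokens instead of counting messages; the cap here is an assumption.
static void TrimHistory(List<MessageBase> messages)
{
    const int MaxHistoryMessages = 20;

    // Drop the oldest messages until the history fits the budget.
    // (If the first message is a "System" message, you would typically keep it and trim after it.)
    while (messages.Count > MaxHistoryMessages)
    {
        messages.RemoveAt(0);
    }
}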

 

Then we define the “user ask”. This is a generic term I’ve seen used for AI input, so I’ve continued its use here. In a larger program, this input would be gathered from the actual user through the UI, possibly cleaned and/or trimmed of extra spaces, and checked for length, before being considered both saved locally and safe to use.
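
Here is a small sketch of that clean-up step; the 2000-character limit is an arbitrary assumption for illustration, not a library requirement:

// A minimal clean-up pass for user input before it is added to the conversation.
// The 2000-character limit is an arbitrary assumption, not a library requirement.
static string? PrepareUserAsk(string? rawInput)
{
    if (string.IsNullOrWhiteSpace(rawInput))
        return null; // Nothing usable was entered

    string cleaned = rawInput.Trim();

    if (cleaned.Length > 2000)
        cleaned = cleaned.Substring(0, 2000); // Keep the request to a reasonable size

    return cleaned;
}

// Usage:
// string? userAsk = PrepareUserAsk(Console.ReadLine());
// if (userAsk != null) messages.Add(new UserMessage(userAsk, ChatMessageRoleType.User));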

 

The string value of the “user ask” is then displayed to the UI. In a larger program, this could be the moment that the “chat input” area is cleared and the “user ask” is displayed in the “chat window” that shows the on-going conversation. The “user ask” is also officially added to the conversation history List for the chat model to find on the next request to the Chat Client.

 

Now we create an official Chat Request object to submit to the chat model. Here we set the conversation history in “Messages”, select the chat model to use with “Model”, and set the “Temperature” for the request. Note that the “ChatCrafters AI Suite” offers many more options in the Chat Request object. Also, notice that the chat “Model” being used can be set for each individual request. There may occasionally be times when a programmer wants to change which model is being used while maintaining the “same conversation” and this makes that possible.
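
For example, the same conversation history List can be sent to different models on different requests, using the enum members shown in the examples:

// The same conversation history List can be reused while a different model is chosen per request.
var quickRequest = new ChatRequest
{
    Messages = messages,
    Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName
};

var detailedRequest = new ChatRequest
{
    Messages = messages, // Same history, different model for this one request
    Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT4_Turbo_Preview).EndpointName
};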

Side Note: Chat Request – Temperature

The setting of “Temperature” in the Chat Request object is possibly the most important setting when working with OpenAI Chat.

 

This is known more specifically as “sampling temperature” and the possible values are 0.0 (zero) to 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make the output more focused and deterministic.

 

OpenAI states that the default value for this is 1.0, but I believe that the standard public ChatGPT uses 0.7. In my personal experience when it comes to “chat bots”, 0.8 to 1.2 are the best values to use, with 0.8 making the responses slightly more serious and 1.2 being good for more expressive character types.
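
As a quick illustration of those ranges, here are two requests that differ only in “Temperature” (the values simply reflect the personal preferences described above):

// Lower temperature: slightly more serious, focused responses
var seriousRequest = new ChatRequest
{
    Messages = messages,
    Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName,
    Temperature = 0.8
};

// Higher temperature: better for more expressive character types
var expressiveRequest = new ChatRequest
{
    Messages = messages,
    Model = ChatModels.ModelInfo.GetModelProperties(ChatModels.ChatModel.GPT3_5_Turbo).EndpointName,
    Temperature = 1.2
};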

Example with Comments Notes continued...

Now we move into the Try block to get some work done. Because we’ve already packaged up everything earlier in the Chat Request, we can simply submit it to the Chat Client using the “bulk response” method “GetChatResponseAsync”. This method returns two separate values. The first is a Chat Completion object that holds a lot of data about the response. The second, “finishReason”, is a string containing the reason why the model stopped responding. The usual value is “stop”, which means the response came to a natural end. Other values for “finishReason” indicate things like the response running out of space in the Context Window (hitting the token limit) or being cut short by a content filter.

 

The Chat Completion object has properties that can hold tool calls (function requests), data about token usage, and more, but for this example we focus only on the Chat Completion “Choices”. The chat models can respond with more than one edition of “response text” in a single Chat Completion. There may be times when this is useful, but when using the chat model as a chat bot there will only be one Choice in each Chat Completion.

 

When using the basic “bulk response” chat (as opposed to “streaming response”) and with no tools, the “finish reason” will nearly always be “stop”, meaning that the chat model succeeded in generating all the text required in the response. In addition, when using the “streaming response” method, the “finish reason” indicates whether the current streamed chunk is the final one or not.
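
Here is a small sketch of checking the “finish reason” after a “bulk response” call. The “stop” value is described above; the other string values assume the library passes OpenAI’s raw finish reasons (“length”, “content_filter”, “tool_calls”) through unchanged:

// Inspect the finish reason string returned alongside the Chat Completion
switch (finishReason)
{
    case "stop":
        // Natural end of the response; nothing special to do
        break;

    case "length":
        Console.WriteLine("Warning: the response was cut off by the token limit.");
        break;

    case "content_filter":
        Console.WriteLine("Warning: the response was stopped by the content filter.");
        break;

    default:
        Console.WriteLine($"Unexpected finish reason: {finishReason}");
        break;
}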

 

Finally, we make sure there’s content in the Chat Completion and display all results (again, should be only one) to the UI. The other bits of code marked “Tell the user” exist only to gracefully display unexpected runtime errors (cannot connect to internet, etc.).

 

Lastly, I went ahead and included some simple code I’ve used for years that prints “Hit any key to continue…” to the UI and waits for any key press. This especially helps beginners see the program because Visual Studio’s default settings will auto close the window the moment the program exits.

Final Notes

Using the “ChatCrafters AI Suite” and the example with comments above, it’s already possible to build a basic .NET chat bot into any Console, WinForms, or WPF application, or into a website. The raw API from OpenAI has some complexities, but OpenAI has made it possible to leverage their AI models from any language or platform, and the “ChatCrafters AI Suite” makes it easy to bring that power into any .NET project.

 

The “ChatCrafters AI Suite” supports all of the Chat Models currently supplied by the OpenAI API. You may notice that the list of Chat Models has eight model names but only five unique models. The names “GPT4 Turbo Preview”, “GPT4”, and “GPT3.5 Turbo” are continuously updated by OpenAI to point at the newest stable editions of each of those base models. Developers looking for stability should use these model names in code, while developers with more specific needs might want to select one of the dated models.

 

The “ChatCrafters AI Suite” is built with .NET 8.0, so that’s the main prerequisite for coding with the library. The OpenAI API Key is needed to run any of the code shown here. Other than those items, be aware that all documentation here is written with the assumption that Visual Studio is being used for development, because that’s what I use.

© 2024 Chat Crafters, LLC