Use LLM Like a Class Object in Your Script

Have you ever wondered whether it is possible to integrate LLMs and GenAI into your application directly, straight in the code?
This is important because it enables:
- Creating pipelines powered by LLMs within the project structure,
- Early detection of code smells,
- Automated code reviews at build time,
- Custom actions (README updates, architecture checks, etc.),
- Running against locally hosted LLMs,
- Seamless integration with any project and code stack,
- Compatibility with code versioning,
- Output that is as close to deterministic as possible,
- Integration with standard MCP protocols and servers,
- Custom scripts programmable by the product's own developers.
This is now possible. Microsoft recently released GenAIScript, a framework and library that enables integration between scripts and LLMs, and I was genuinely impressed when I started working with it. It is straightforward to build pipelines, integrate scripts into your command-line applications, and create instruction files within your project that can be versioned (with git) and reviewed by the team. More importantly, its output can come very close to being deterministic. Let's dive into a practical example.
In the following example, I am going to create a console application in C# and use it as an MCP server that reverses a string (take a look at my previous article here).
```
dotnet new console -n MyFirstMCP
```
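To expose the console app as an MCP server you also need the C# MCP SDK. At the time of writing it ships as a preview NuGet package, so treat the package name and flags below as a snapshot that may change:

```
dotnet add package ModelContextProtocol --prerelease
dotnet add package Microsoft.Extensions.Hosting
```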
The reverse-echo tool looks like this:
```csharp
[McpServerTool, Description("Echoes in reverse the message sent by the client.")]
public static string ReverseEcho(string message) => new string(message.Reverse().ToArray());
```
Despite the simplicity of this tool, the C# MCP SDK can scan the containing class and expose this method as a tool to any MCP host.
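For context, here is a minimal sketch of what the surrounding server code can look like with the preview ModelContextProtocol package; the hosting calls below follow the SDK's quickstart pattern, but check the current docs, as the preview API may change:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;
using System.ComponentModel;

var builder = Host.CreateApplicationBuilder(args);

// Register an MCP server that communicates over stdio and
// discovers [McpServerTool] methods in this assembly.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

// [McpServerToolType] marks the class so the SDK scans it for tools.
[McpServerToolType]
public static class EchoTool
{
    [McpServerTool, Description("Echoes in reverse the message sent by the client.")]
    public static string ReverseEcho(string message) => new string(message.Reverse().ToArray());
}
```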
Within the same project, create a file with the .mjs extension (e.g. codequality.genai.mjs) and put the following code in it:
```js
script({
    model: "ollama:llama3.2",
    files: ["**/*.cs", "**/*.csproj"]
})

const src = def("csFiles", env.files, { endsWith: ".cs" })

$`You are a helpful assistant. do these csharp files in ${src} follow best practices?`
```
Now notice how simple it is to integrate an LLM with a script. The options object passed to script() lists the environment files, i.e. the files the LLM will consider in its context. After that, def() defines a named variable that excludes non-C# files from the context. GenAIScript then passes the resulting file list into the prompt, and the LLM interacts with those files based on the prompt instructions.
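If you want to nudge a run toward the near-deterministic output mentioned earlier, the same metadata object accepts sampling options. A minimal sketch, assuming the standard temperature option of script() (exact repeatability still depends on the model and provider):

```js
script({
    model: "ollama:llama3.2",
    files: ["**/*.cs", "**/*.csproj"],
    // Lower temperature makes sampling more repeatable across runs.
    temperature: 0
})
```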
This is a great starting point. Watch the video below for the output, and if you have any questions, please do not hesitate to contact me. I will provide more in-depth tutorials on using GenAIScript in complex scenarios. If you have a specific use case for integrating GenAIScript, please get in touch and we can discuss it.
Run the script using the following command:

```
npx genaiscript run {filename}.mjs
```
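The CLI can also override the script's settings at run time. To my knowledge the run command accepts a model override and an output folder, but check `npx genaiscript run --help` for the exact flags in your version:

```
npx genaiscript run codequality.genai.mjs --model ollama:llama3.2 --out ./genai-results
```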
Reviewing the Output
As with everything involving LLMs, it is very important to review the generated output critically.
While the output is clearly readable and concise (note that the different shades of grey indicate tokens), it also reveals a significant concern with LLMs. In my example I used Llama 3.2, which may well be outdated by now, and I am running it locally. However, even some online LLMs still lack knowledge of MCP servers, so the model could not figure out the attributes. It did not recognise them at all and therefore gave me misleading information. This does not diminish how valuable this framework is; it just means that you need to proceed with caution and review every LLM output as you use it.