Building Semantic Kernel Agents with Model Context Protocol (MCP) Plugins in Python

· 16 min read
Justin O'Connor
Founder @ Onward Platforms

Effective AI systems require more than just conversational abilities—they need the capacity to perform actions and access external resources. This guide explores the integration of Microsoft's Semantic Kernel (SK) with the Model Context Protocol (MCP) to develop capable AI agents that interact with external systems. By combining SK's orchestration framework with MCP's standardized interface, developers can create versatile AI applications that leverage both internal and external tools. This approach is valuable for building enterprise-grade solutions, from personal productivity assistants that manage schedules to business applications that interface with company APIs. This guide will walk you through the implementation process step by step.

Semantic Kernel with MCP Examples

· 3 min read

This directory contains the working code examples from the blog post "Building Semantic Kernel Agents with Model Context Protocol (MCP) Plugins in Python".

Prerequisites

Before running these examples, make sure you have the following:

  1. Python 3.8+
  2. Install the required packages:
pip install semantic-kernel mcp
  3. Set up your OpenAI API key as an environment variable:
# On macOS/Linux
export OPENAI_API_KEY=your-api-key-here

# On Windows (Command Prompt)
set OPENAI_API_KEY=your-api-key-here

# On Windows (PowerShell)
$env:OPENAI_API_KEY="your-api-key-here"

Example Files

The examples are organized as follows:

  1. 01_basic_agent.py - A basic Semantic Kernel agent with streaming responses
  2. 02_mcp_server.py - A simple MCP server that provides calculator functions
  3. 03_agent_with_mcp_stdio.py - SK agent that uses the MCP server via stdio transport
  4. 04_mcp_server_sse.py - MCP server that uses Server-Sent Events (SSE) transport
  5. 05_agent_with_mcp_sse.py - SK agent that connects to the SSE-based MCP server

Running the Examples

Basic Agent (No MCP)

Run the basic streaming agent:

python 01_basic_agent.py
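
For reference, here is a rough sketch of what a minimal streaming agent along the lines of 01_basic_agent.py could look like. It is an illustration rather than the repository's exact code: it assumes a recent semantic-kernel release in which ChatCompletionAgent accepts a service directly, so names and signatures may differ slightly in other versions.

import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# Sketch only: assumes a recent semantic-kernel release; the real
# 01_basic_agent.py may differ in its details.
async def main():
    # The OpenAI connector reads OPENAI_API_KEY from the environment.
    agent = ChatCompletionAgent(
        service=OpenAIChatCompletion(ai_model_id="gpt-4o"),
        name="Assistant",
        instructions="You are a helpful assistant.",
    )

    # Stream the reply chunk by chunk instead of waiting for the full response.
    async for chunk in agent.invoke_stream(messages="Tell me a short joke."):
        print(chunk.content, end="", flush=True)
    print()

if __name__ == "__main__":
    asyncio.run(main())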

MCP via Standard Input/Output

To use the MCP server with stdio transport:

  1. Open a terminal and run the agent:
python 03_agent_with_mcp_stdio.py

This single command will automatically start the MCP server as a subprocess.
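
Under the hood, the stdio example follows the pattern below. This is a hedged sketch rather than the repository's exact code: it assumes a semantic-kernel version that ships MCPStdioPlugin in semantic_kernel.connectors.mcp, which spawns the server command as a subprocess and registers its tools with the agent.

import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.mcp import MCPStdioPlugin

# Sketch only: plugin and agent parameters are illustrative and may not
# match 03_agent_with_mcp_stdio.py exactly.
async def main():
    # Launch the calculator MCP server (02_mcp_server.py) as a subprocess
    # and expose its tools to the agent as a plugin.
    async with MCPStdioPlugin(
        name="calculator",
        command="python",
        args=["02_mcp_server.py"],
    ) as calculator:
        agent = ChatCompletionAgent(
            service=OpenAIChatCompletion(ai_model_id="gpt-4o"),
            name="MathAssistant",
            instructions="Use the calculator tools to answer math questions.",
            plugins=[calculator],
        )
        response = await agent.get_response(messages="What is the sum of 3 and 5?")
        print(response.content)

if __name__ == "__main__":
    asyncio.run(main())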

MCP via Server-Sent Events (SSE)

To use the MCP server with SSE transport:

  1. First, start the MCP server in one terminal (a sketch of this server appears after this list):
python 04_mcp_server_sse.py
  2. Then, in another terminal, run the agent:
python 05_agent_with_mcp_sse.py
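
For context, a calculator server like 04_mcp_server_sse.py can be sketched with the FastMCP helper from the mcp package. The tool names below (add, multiply) are illustrative and may not match the repository's exact set.

from mcp.server.fastmcp import FastMCP

# Sketch only: the server name and tools are illustrative examples.
mcp = FastMCP("Calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    # Serve over SSE on the default port (8000); the stdio variant in
    # 02_mcp_server.py would call mcp.run(transport="stdio") instead.
    mcp.run(transport="sse")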

Example Prompts

Try these prompts with the Math Assistant:

  • "What is the sum of 3 and 5?"
  • "Can you multiply 6 by 7?"
  • "If I have 20 apples and give away 8, how many do I have left?"
  • "What is 15 divided by 3?"
  • "Calculate 42 minus 17."

Troubleshooting

  • If you get errors about missing modules, make sure you've installed all required packages.
  • If the agent cannot connect to the MCP server in SSE mode, ensure the server is running and listening on the expected port (default is 8000).
  • For function calling issues, ensure you're using a model that supports function calling (e.g., gpt-4o or gpt-3.5-turbo).
  • If you see connection errors when exiting the application, this is normal and doesn't affect functionality. The examples use async context managers to properly handle connections and cleanup.

Implementation Notes

  • The MCP plugins are used with async context managers (async with MCPSsePlugin(...) as plugin:) to properly handle connections and cleanup.
  • Both server examples use the generic mcp.run(transport="...") method, where the transport can be "stdio" or "sse" depending on the desired communication method.
  • The SSE client connects to the /sse endpoint on the server (e.g., http://localhost:8000/sse).
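
Tying these notes together, a minimal sketch of the SSE client side (in the spirit of 05_agent_with_mcp_sse.py) could look like the following. The URL assumes the server from the previous section is listening on localhost:8000, and the semantic-kernel class names are assumptions based on recent releases.

import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.mcp import MCPSsePlugin

# Sketch only: URL, names, and instructions are illustrative.
async def main():
    # Connect to the already-running SSE server on its /sse endpoint.
    async with MCPSsePlugin(
        name="calculator",
        url="http://localhost:8000/sse",
    ) as calculator:
        agent = ChatCompletionAgent(
            service=OpenAIChatCompletion(ai_model_id="gpt-4o"),
            name="MathAssistant",
            instructions="Use the calculator tools to answer math questions.",
            plugins=[calculator],
        )
        response = await agent.get_response(messages="Can you multiply 6 by 7?")
        print(response.content)

if __name__ == "__main__":
    asyncio.run(main())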


Migrating Apps to the Cloud

· 10 min read
Justin O'Connor
Founder @ Onward Platforms

App modernization looks different for everyone!

There are no hard-and-fast rules or one-size-fits-all approaches to migrating applications to the cloud. Every organization has its own unique challenges, constraints, culture, goals, budgets, and market pressures that guide it on its journey. There are, however, some common strategies that have helped organizations successfully move tens, hundreds, or thousands of applications to the cloud.

Using Shadcn UI with Docusaurus

· 13 min read
Justin O'Connor
Founder @ Onward Platforms

Introduction

I love using Shadcn UI for my applications. The components are beautifully designed. Best of all, you can just copy and paste them into your apps. They are accessible, customizable, and fully open source.

I also love Docusaurus for building my documentation sites. It's a great way to quickly create documentation sites that look professional.

The only problem? They are really hard to use together. Because Docusaurus uses its own CSS framework (Infima) and a custom build system, it's hard to integrate Shadcn UI components into your Docusaurus site.

As an engineer, I couldn't let that stop me. I set out to find a way to integrate Shadcn UI components into my Docusaurus site, and that is exactly what I'm going to show you in this guide.

Welcome

· One min read
Justin O'Connor
Founder @ Onward Platforms

Hello! I am Justin O'Connor, Founder of Onward Platforms and Director of AI agent development for one of the Big Four consulting firms in the US. I have been building cloud infrastructure and software for nearly a decade and love building products that delight developers and customers.

I'll be sharing updates about our journey, technical insights, and thoughts on the future of developer tools here. Stay tuned for more exciting developments!

Converting Python's requirements.txt into Homebrew formula resources

· 3 min read
Justin O'Connor
Founder @ Onward Platforms

For developers who maintain both Python and Homebrew packages, you've likely found that managing dependencies between the two ecosystems can be a bit challenging. Today, I am going to share a Python function I wrote that automates the conversion of a requirements.txt file into Homebrew resource blocks. This can be a real time-saver, ensuring you have the correct dependencies for your Homebrew formula.

In Python, requirements.txt is the standard way to specify package dependencies with their corresponding versions. In a Homebrew formula, dependencies can be listed as 'resources'. Each resource block defines a package dependency, including its download URL and a SHA256 hash for verification.

Azure Periodic Table of Resource Naming Convention Shorthands

· 3 min read
Justin O'Connor
Founder @ Onward Platforms
HOT UPDATE - August 2023

This resource is now available as an interactive web application at azureperiodictable.com!

The Power of Consistent Resource Naming

Consistency in naming Azure resources is essential whether you are just getting started in the cloud or have a mature team and platforms. Defining and enforcing naming conventions for resources can help improve:

  • Resource discovery and management
  • Cost tracking and optimization
  • Automation and infrastructure as code
  • Organization of resources across Azure tenants
  • Team collaboration and resource identification

A Guide to Getting the Most Out of Agile

· 19 min read
Justin O'Connor
Founder @ Onward Platforms

I love helping teams deliver more application and cloud platform features faster to happier customers. An important component of being able to accomplish this is following an Agile methodology that works.

In this post I will cover how teams can get the most out of Agile principles and practices to deliver products faster and more iteratively. The concepts in this article heavily leverage the Scrum and Scaled Agile ("SAFe") frameworks and are based on my experience delivering large scale development efforts.

Using TFLint Behind a Corporate Firewall

· 3 min read
Justin O'Connor
Founder @ Onward Platforms

There are times when you may not be able to initialize a plugin (aws, azurerm, etc.) for tflint because you are behind a corporate firewall. This can cause failures when running tflint --init.

Being able to lint code locally helps ensure you are meeting your team's quality standards before you push to the remote repo. This short post outlines how you can work around this issue on a Mac.