S08E05 - Building the Future of APIs: Mike Kistler's Insights on OpenAPI and MCP
Sponsors
Support for this episode of The Modern .NET Show comes from the following sponsors. Please take a moment to learn more about their products and services:
- RJJ Software’s Strategic Technology Consultation Services. If you’re an SME (Small to Medium Enterprise) leader wondering why your technology investments aren’t delivering, or you’re facing critical decisions about AI, modernization, or team productivity, let’s talk.
Please also see the full sponsor message(s) in the episode transcription for more details of their products and services, and offers exclusive to listeners of The Modern .NET Show.
Thank you to the sponsors for supporting the show.
Supporting The Show
If this episode was interesting or useful to you, please consider supporting the show through Patreon or Buy Me A Coffee; you’ll find the links in the show notes.
Episode Summary
Mike Kistler joined the show to talk about two related topics: OpenAPI, the de facto standard for describing HTTP APIs, and the Model Context Protocol (MCP), an emerging standard for connecting LLMs to external tools and data. He explained how an Open API document acts as the contract for a service: customers build against it, tooling is generated from it, and review boards compare versions of it to catch breaking changes before they ship.
Kistler highlighted how much of that contract modern ASP.NET Core can produce directly from your code. Features like TypedResults, and the input-validation support added for Minimal APIs in .NET 10, mean the generated Open API definition reflects what the code actually does, rather than relying on hand-written attributes that can drift out of date. From that one document you can generate documentation, test UIs, integration tests, client SDKs, and even command-line tools.
The conversation then turned to MCP, which connects an LLM to tools (executable capabilities) and resources (data) that the model wasn’t trained on. Kistler described how the protocol deliberately routes every tool call back through the MCP host, keeping a human in the loop, and how the C# SDK lets .NET developers build their own MCP servers.
Kistler also talked about the importance of Azure Functions in hosting MCP servers. He noted that Azure Functions is a pay-as-you-go service, which makes it an attractive option for developers who want to build and deploy MCP servers without having to worry about upfront costs or maintenance responsibilities.
Throughout the conversation, Kistler emphasized the potential of MCP to transform the way we build and consume APIs. He encouraged developers to learn more about MCP and get started building their own MCP servers, which he promised would be amazing experiences. The interview ended with a lighthearted remark from Jamie, the host, who joked that Mike’s suggestion to create an MCP server for the podcast was a great idea - even if it might inspire some chaos in the world of web development.
Episode Transcription
And we talk about that contract. We say, “this is your contract. This Open API definition that you have is the contract for your service.” And in the end, the way customers interact with Azure is through APIs. And so it’s important to have that contract so that customers know how things work, how to use them, hopefully how to use them easily, right?
Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I’m your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.
Today, we’re joined by Mike Kistler to talk about two topics (we usually only tackle one per episode, so you’re getting a bonus with this one): Open API, and MCP along with the MCP SDK for C#.
We started our conversation by focussing on Open API, as this is a passion of Mike’s. We talked about what it is, how you’ve likely already been using it with any ASP.NET Core web APIs that you’ve worked on, and how the latest versions of ASP.NET Core can generate a lot of the Open API specification for you, without you having to add lots and lots of metadata and attributes.
Pro tip: If you’ve been using the Swagger UI in your applications, you’ve been using Open API.
And when the LLM decides that it wants to use an MCP tool or access an MCP resource, it doesn’t go and do that directly. It comes back to the MCP host and asks the MCP host to call a tool with a particular set of parameters, or to access an MCP resource. And at first, when I saw this in the MCP architecture, I thought, “boy, that’s clunky. Why not have the LLM just call these things directly?” And there’s a deliberate reason why it was done this way.
We then pivoted over to talking about MCP (or Model Context Protocol), which is a rapidly evolving standard for creating your own agents and applications which can communicate with, or be instructed by, LLMs. We talked about how the MCP standard works, and how the standard is written in such a way that there’s always a human in the loop. We also talked about how you can build your own MCP servers using the MCP SDK for C#.
It’s worth pointing out that both MCP and Open API are evolving standards. While Open API tends to evolve at a much more relaxed pace, the MCP standard (having not even reached a year old when we recorded) uses the date as its version number. And Mike actually references the latest version of the MCP spec in our conversation, which will give you a clue as to when we recorded it.
Before we jump in, a quick reminder: if The Modern .NET Show has become part of your learning journey, please consider supporting us through Patreon or Buy Me A Coffee. Every contribution helps us continue bringing you these in-depth conversations with industry experts. You’ll find all the links in the show notes.
So let’s sit back, open up a terminal, type in dotnet new podcast and we’ll dive into the core of Modern .NET.
Jamie : So Mike, welcome to the show. It’s been, oh my goodness, it’s been months since we last talked. We talked at MVP Summit, and I’ve really been looking forward to this conversation.
Mike : And me too, Jamie, me too. Thanks. It’s great to be on your show.
Jamie : Thank you very much. I really appreciate you saying that.
So for the folks listening in, would you mind giving them a really quick, brief intro of yourself and the kind of stuff that you work on, that you can talk about, and all that kind of stuff?
Mike : Sure, sure. Happy to do that. So my name’s Mike Kistler. I’m a principal product manager at Microsoft. I’ve been on the .NET team coming up on two years now. Before that, I worked in the Azure SDK team. I joined Microsoft about four years ago. And before that, I worked at IBM for a long, long time.
My main interests and background are in REST APIs, HTTP APIs, and Open API descriptions of those REST APIs. And then the tooling that is driven from Open API, doing things like SDK generation and other things. That was kind of what got me into .NET when I was looking for a new opportunity. The .NET team has some great capabilities for generating Open API from ASP.NET Core web APIs, and I’ve been working on that basically since I got onto the team. A bunch of folks there preceded me on that: Safia Abdalla is one of the lead engineers on it, and I work very closely with her and the rest of the folks.
Jamie : Cool. Yeah, we chatted with Safia not that long ago, actually. Actually, it’ll be two weeks before this one comes out. So we’ve had Safia on talking about her work. And yeah, the Open API stuff: like, I remember building REST APIs without any kind of documentation around them, nothing like that. For folks who are just getting started, it is mind-blowing just how much stuff you can generate from those documents.
Mike : Yep.
Jamie : It’s really, really cool. You know, because back in the—I don’t want to say, “back in the dark ages,” but you know—when I started, there was no such thing. And you know, if you needed to do a POST request to test your API endpoint, you needed to get specific tools. Whereas if you’re using Open API and, say, something like Swagger or one of the other many, many other front ends, you could do it all in the browser, which is just amazing.
Mike : Yeah, yeah. No, it’s great. And there was a time when there were all these different competing ways to describe REST APIs. You had Open API, but you also had API Blueprint and you had RAML and all these other things. And all those other ones have pretty much fallen by the wayside. Everybody’s, you know, coalesced on using Open API; it’s really become the standard way to define HTTP APIs.
Jamie : Absolutely. I’d love to talk a little bit more about that, because I have some interesting thoughts about... well, we’ll come on to it in a minute. But, like, I create an ASP.NET Core web API, and some magic happens, and I get a UI out of the other side of it for me to use to debug it, right? Maybe it’s Swagger, or maybe it’s something else.
Mike : Yep.
Jamie : But there’s some magic involved in taking my API endpoints, creating some kind of middle step, which is what we’re talking about, the Open API documentation, then suddenly a UI pops out the other side, right?
Mike : Yep.
Jamie : And that is just... it’s really cool that I don’t have to do that, because I love not having to do stuff. I genuinely do.
But then I also know that I can not only create that sort of front end through—again, I keep using Swagger, but it’s the one that I know, and there are lots of others—but I can also hand that off to, say, a front-end engineer, and they will know exactly what’s going on with my API. So it kind of acts like a contract, doesn’t it?
Mike : It’s a contract, absolutely. And we talk about that quite a bit, because in addition to what I’ve talked about, I’m on the Azure API Stewardship Board, which reviews all of the data plane REST APIs that go out. And we try to make sure that they’re designed for consistency, so that all of the REST APIs in Azure have consistent API patterns, and developers can go from one service to another and understand how things work.
I’m also on the Breaking Changes review board, which reviews the REST APIs to make sure that they aren’t breaking when you go from version to version. And we talk about that contract. We say, “this is your contract. This Open API definition that you have is the contract for your service.” And in the end, the way customers interact with Azure is through APIs. And so it’s important to have that contract so that customers know how things work, how to use them, hopefully how to use them easily, right.
Jamie : Absolutely. Creating that contract is not so easy. Again, I remember back in the earlier days of my web dev history, where I’d create a bunch of APIs, like a web API for something or other. Before microservices were a thing, before backend-for-frontend was a thing, I’d just be creating the back end, and then I’d hand over some work to the front-end developer, who’d then go, “cool. So what’s the API that I need to call, and how does it work? What data are you expecting?”
Let’s talk about that for a couple of minutes then, right? I know from what you’ve said earlier, and from our conversations, that Open API is one of your big passions. So, I guess, let’s dial it back a bit. Let’s start from the beginning. What is Open API? I mean, it’s being leveraged for me automatically in ASP.NET Core, but what if I had to write my own? Is that something I could do?
Mike : So that is something that you could do.
One of the big debates right now, one that’s actually been going on for a long time, about how you build API projects is: do you design your API first, or do you write your code and then extract the API definition from the code? You know, I used to believe in the design-first philosophy more than I do now. I still think it’s very important to pay very close attention to your API design. When you do design first, you write the Open API by hand, or you use some tools to generate it. And you know, you can do that. Then you have an Open API, and you have to figure out how to build an implementation that matches it.
You know, the perspective that I’ve come to now with ASP.NET Core is that you can have a very intentional Open API definition produced from your code; you can be as careful about it as you want in terms of describing the API the way that you want it to work. But the beauty of generating it from your code is that you then know, not perfectly, but with very, very high confidence, that the API definition matches the implementation. That’s one of the things we see quite commonly: when people design their API first, separately, and then implement it, they don’t have that very close connection, that close correlation, between the design and the implementation. So yes, you can write your Open API yourself.
You asked, what is Open API? Open API is just a description language for HTTP APIs. It’s typically written in JSON, although you can write it in YAML; there’s sort of a divide in the community about whether JSON or YAML is better. And it just goes through and describes things about your API: what are the endpoints? What parameters do those endpoints take? What request bodies do they take, and what do the response bodies look like? What are the authorization schemes that are required? All those things.
And we can extract a lot of that information from your ASP.NET Core application. You know, we know the types that you’re consuming and producing from your APIs, and we can turn those types into JSON Schema, which is the way things are described in Open API. So yeah, it’s a very nice system, I think.
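To make that concrete, here is a minimal, hand-written sketch of what an Open API document can look like in YAML. The service, route, and schema are invented for illustration (they aren’t from the episode), but it shows the pieces Mike lists: an endpoint, its parameters, and the shape of its responses.

```yaml
openapi: 3.0.3
info:
  title: Todo API          # hypothetical service name
  version: "1.0"
paths:
  /todos/{id}:
    get:
      summary: Get a todo item by id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested todo item
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  title: { type: string }
        "404":
          description: No todo item with that id exists
```

Everything downstream (UIs like Swagger, generated SDKs, tests) is driven from a description of exactly this shape.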
Jamie : Yeah, and I guess I kinda hinted at it as well, right? I can give that to another engineer and say, “here’s my API design. This is what it looks like. This is what I expect. Go build a UI for it, or go build something else that talks to it.” And because it matches what I’ve written, or if I’ve designed it first, I can hand it off to someone whilst I build it. That kind of increases my confidence that anyone calling that API is gonna get it more correct than had I just said, “the API exists. Good luck.”
Mike : Right. And it’s amazing, all of the different things that you can generate off of Open API. You can generate documentation. You can generate the UIs that you talked about for doing sort of ad-hoc testing. You can also generate actual tests, right? Real integration or unit tests for your APIs. You can generate SDKs, client libraries. And there’s lots more... that’s actually kind of how I got into the Open API world: I learned how to build client libraries, SDKs, from an Open API spec. And what was beautiful about that is you would get your best practices for those SDKs baked in from your generator. You just had to build the generator to do the right thing, then point it at Open API specs, and you would get great client libraries.
Terraform, you know, all the infrastructure-as-code tools that we have, can be generated from Open API. Command-line tools too: the Azure CLI is generated from Open API. So it’s really very, very powerful.
Jamie : Yeah. I didn’t realize that you could generate sort of CLI stuff from Open API. That’s genuinely impressive. And it makes sense, right?
Mike : Yep.
Jamie : If you understand how to read and parse Open API, you can then take... I’m wary of how I’m about to say this: you can take away the toil of building those things around your APIs, that sort of CLI or SDK or documentation or that kind of thing, right? You still have to build the thing, but you can partially automate it, right?
Mike : That’s right. And you can decide what your best practices are. You can bake those into the generator, and then you just get those every time.
Jamie : That’s really cool. Yeah, oh my goodness. Now that opens up a whole world of possibility, because, genuinely, I’ve only ever used... like, I’ll go “file, new project, web API.” At the moment I’m using Swagger/Swashbuckle, and then suddenly a UI appears. But if I can build my API, or design it first, and then give that JSON/YAML, whichever one I choose, to whomever, that just automates their journey as well. That’s a productivity multiplier right there, right? That’s your 10x engineer.
Mike : Absolutely. Yep. Yep.
Jamie : Cool. Okay. So you’ve talked about how Open API... and I do this all the time: I say “Open AI” when I mean “Open API,” and I say “Open API” when I mean “Open AI.” Naming things is hard, right?
Mike : Indeed, indeed.
Jamie : I have my Open API document and I can generate a whole bunch of stuff from it. And you said that you can also write your Open API document and generate the API, perhaps just the fascia, wrong word, the facade for it.
Mike : Yep.
Jamie : That presentation layer, I guess. You can build that because that’s not necessarily the difficult bit. The difficult bit is the business logic behind the API, right?
Mike : Absolutely. Yep. Yep.
Jamie : I do remember, I’m just remembering now, when I started putting... again, I’m gonna use Swashbuckle as the example. So I’m gonna say Swashbuckle and Swagger a lot, and in the .NET space they’re kind of the same, aren’t they? One’s an implementation of the other, I believe. But when I started playing around with Swagger, or Swagger UI, or Swashbuckle, whichever one is the source of truth, I noticed I had to add loads and loads and loads of attributes to my code. I was like, “okay, so this produces this. And this is an acceptable MIME type. And it will produce this kind of response, a 200 or a 500,” and all that kind of stuff. And I’ve noticed that I kind of don’t need to do as many of those now. That’s .NET and Open API taking away that complication for me, right?
Mike : That’s right. That’s right. And what you’re talking about there is: how much metadata can we extract from the actual, real parts of your program to put into the Open API, versus how much do developers have to add as sort of extra?
So there are attributes, like the ProducesResponseType you were talking about, which is a way of saying, “my endpoint will produce this type of response. This is what the response body will look like for this status code,” or whatever.
With Minimal APIs, which is really the foundation of modern .NET web APIs now, there’s a new capability, a new feature called TypedResults. When you return TypedResults from your endpoint, that allows the framework to capture, “this is what this endpoint is returning,” you know, with a type and the status code, and all that kind of stuff. And when you use TypedResults, you don’t need to use those ProducesResponseType attributes; we can extract it from the code.
And what’s beautiful about that is it’s now coming from the code. It’s coming from what the code is actually doing as opposed to something that the developer, you know, sort of annotated on top, which could be wrong. Or it might have been right when they wrote it and then it became wrong later because somebody changed the code and didn’t change the attribute. So using things like TypedResults really ties the Open API definition that’s generated with the way the code actually works.
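As a rough sketch of what Mike describes (the Todo type, route, and data here are mine, not from the episode; AddOpenApi/MapOpenApi are the built-in Open API generation that shipped in .NET 9), a Minimal API endpoint returning TypedResults might look like this. Because the declared return type names both possible results, the framework can emit the 200 and 404 responses into the generated document without any ProducesResponseType attributes:

```csharp
using Microsoft.AspNetCore.Http.HttpResults;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddOpenApi(); // built-in Open API document generation (.NET 9+)
var app = builder.Build();
app.MapOpenApi();              // serves the generated document, e.g. /openapi/v1.json

// The return type tells the framework this endpoint produces either a
// 200 with a Todo body or a 404 -- no attributes needed.
app.MapGet("/todos/{id}", Results<Ok<Todo>, NotFound> (int id) =>
    id == 1
        ? TypedResults.Ok(new Todo(1, "Write show notes"))
        : TypedResults.NotFound());

app.Run();

record Todo(int Id, string Title);
```

If someone later changes the handler to return a different status code, the declared return type has to change with it, so the generated Open API can’t silently drift the way a stale attribute can.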
Jamie : Right, yeah. That was going to be one of my points, because I had two points about all of those attributes and all that metadata. The first thing: when they first came out, a whole bunch of colleagues of mine were like, “but I shouldn’t have to add all of this stuff that isn’t really about, for instance, the controller or the endpoint, because then that ties me to that particular technology.” And I said to them, “yes, but you’re not tying yourself to the front-end technology, you’re tying yourself to the Open API technology.” Which is a good thing, right? It’s an open standard. It’s there for everyone to look at; you know, it’s widely known.
And the other thing was exactly what you just said there. I’ve created an API; initially, let’s say it was a POST. We’ll ignore the argument of whether create should be a POST or a PUT, let’s just put that to one side. I created a POST, and it would create or update a record of some kind, and it would return a 201 for created. Brilliant. And then someone’s come along and changed it so it produces a 200, rather than, “I’ve created it, and here’s the URL.”
Mike : Uh-huh.
Jamie : And then, like you said, they’ve gone and forgotten to update those attributes. So the Open API spec that comes out the other side says it’s going to return a 201, whereas when you actually run it, it returns a 200. It’s those tiny things that can easily change, that we can miss, that make it difficult to debug an issue. And those things can slip all the way through unit tests and integration tests, hopefully not integration tests, but they can slip through all sorts of different tests and get out into the production world. And then people are saying, “you’re saying you should be giving me a 201, but actually it’s coming back as a 200. That’s a bug. What’s going on?”
Mike : Yep. Yep. We see that actually quite frequently in the Azure APIs. Once it’s out in the world, in the GA API, unfortunately, you have to go fix the API definition to match the behaviour rather than the other way around, because people are driven by what the API actually does versus how it’s documented.
Jamie : Yeah. Yeah. And regardless of who it is that’s creating the API, whether it’s a Microsoft one, or a Jamie one, or a whomever one: once you’ve set the standard (“standard” is maybe not the right word, but it’s the word I’m going to use) of how an API should behave, and people start integrating against it, there’s not a great deal you can do, right? You can say, “oh, there’s a V2 and you’ll need to change all of your stuff.” But if “change all your stuff” means re-releasing a new version of an app that maybe no one works on any more, you can’t really do that, right? It’s not so easy.
Mike : No. Yeah. I mean, this is... you’re getting into the area of the breaking changes stuff that we work on. And we sometimes have to remind the service teams what our contract is, what our desire is as far as APIs that have been made GA in Azure. The desire is that those work forever, right?
Once you make an API GA, customers start building their apps on it. And as you say, they might build an app and then that app is done. It’s released in the customer’s production environment. The development team goes away, and it just continues to run; and it should continue to run for as long as the customer wants to run it. We shouldn’t force the customer to have to make changes. So yep, that’s very important.
And, you know, we talked about the tooling aspect earlier. This is another thing that we actually use Open API for: detecting these changes. When a service comes along and says, “I have a new API version,” we have tooling that will compare that new API version, that new Open API, to the previous Open API. And we will look for things like, “did you add a new required parameter?” Well, if you did, then that’s going to break existing users. “Did you remove a property from a response?” That’s going to break customers. So once again, we use Open API, and the tooling that we can build with Open API, to facilitate these important processes.
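The core idea behind that kind of comparison tooling can be sketched in a few lines. This is my own illustrative sketch, not Azure’s actual tool (a real checker covers far more: removed response properties, changed types, narrowed enums, and so on). It loads two versions of an Open API document as JSON and flags any parameter that became required between them:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.Json;

// Collect "path operation parameter" triples for every required parameter
// declared in an Open API document.
static HashSet<string> RequiredParams(JsonDocument doc)
{
    var found = new HashSet<string>();
    if (!doc.RootElement.TryGetProperty("paths", out var paths))
        return found;

    foreach (var path in paths.EnumerateObject())        // e.g. "/todos/{id}"
    foreach (var op in path.Value.EnumerateObject())     // e.g. "get", "post"
    {
        if (op.Value.ValueKind != JsonValueKind.Object) continue;
        if (!op.Value.TryGetProperty("parameters", out var ps)) continue;
        foreach (var p in ps.EnumerateArray())
            if (p.TryGetProperty("required", out var r) && r.GetBoolean())
                found.Add($"{path.Name} {op.Name} {p.GetProperty("name").GetString()}");
    }
    return found;
}

// Hypothetical file names for the two versions being compared.
using var v1 = JsonDocument.Parse(File.ReadAllText("openapi-v1.json"));
using var v2 = JsonDocument.Parse(File.ReadAllText("openapi-v2.json"));

// Any parameter required in v2 but not in v1 will break existing callers.
foreach (var breaking in RequiredParams(v2).Except(RequiredParams(v1)))
    Console.WriteLine($"BREAKING: new required parameter: {breaking}");
```

The point is that because the contract is machine-readable, “is this change breaking?” becomes a set difference rather than a manual review.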
Jamie : Yeah, see, that was gonna be the next question I was gonna ask, and I’ll come back to it in a minute.
But, like, I’m getting flashbacks, for the devs who’ve been around long enough. You know, folks didn’t always use web APIs; they used operating-system-level APIs. You might talk to the Win32 API and say, “draw me a window please, and give me a handle to that window so that I can do things with it. Get messages from it, change the font, or whatever.” And obviously, if that API changes between different versions of the operating system you’re using, that means your app no longer works on the new operating system, right?
And so I can imagine that, because we’re in a web world, the feeling is, “let’s innovate really, really quickly. Whoops.” I’m not saying Microsoft do this; lots of people in lots of teams do this. I’ve seen it with a number of my customers, where it’s like, “quickly innovate. We’ll add this. No, now we need to remove this property. Now we’ll change the property name. Now we’ll change the property type.” And I’m like, “you do realize people are using this stuff, right? It needs to be a little bit more stable than that.”
Mike : Yep. Yeah.
Jamie : Oh my goodness.
So my question was going to be: you said earlier on that you can use the Open API documentation that you generate for your API to generate tests. And you’ve just said that you’ll compare the Open API documentation that is generated for, let’s call it, version 1 of an API to the documentation that is created for version 2, or a version 2 proposal, of the API. This may seem like a latent question, but can I use the Open API documentation to test that my API is correct?
Because, like you said, it might be that, as you mentioned earlier on, “I’ve added this API endpoint, and it used to return a 201, but now it returns a 200.” I can bake that into my tests, right? And say, “oh, did we update the documentation? Or should the documentation always be static, and now the API has changed for all the tests?”
Mike : There are certain things that you can test with tests generated from the Open API document.
You said, “can I test that my API is correct?” Unfortunately, that gets into the whole business-logic aspect. So, for example, if you have a filter parameter, which many Azure APIs do, that will let you select which things come back and which things don’t, that’s awfully hard to test just by looking at the Open API document. But things like whether it’s a 200 or a 201? Absolutely. Whether certain properties come back in a response, that you can test. Whether certain parameters are respected in the request, those things you can test.
One of the things that I did, I can’t remember which conference this was, but I showed generating an Open API document for an API service that I had in ASP.NET Core, and then telling Copilot, because we’re in the world of AI now, Jamie, I’m sure you know that. You can ask Copilot, “look at this Open API and generate a set of tests for me,” and it’ll do it. It will test for things like, “did you get the right response code?” right?
One of the big things that we did in .NET 10 for Minimal APIs is we added support for validation of inputs, which is a big feature. That’s a really important feature because validating inputs is important for security. It’s important for the proper operation of your API. That capability is now there in Minimal API for .NET 10.
And the validations, the attributes that you put on things to say, “I want you to validate that this string is no longer than 50 characters,” for example, those show up in the Open API that gets generated. And then Copilot or whatever can say, “okay, well, that’s something that I should test. I should send it a string with 51 characters and make sure that it fails.” So we are just continuing to get better and better at describing the behaviour, at least at the surface, at that facade, as you said, of the API, describing that behaviour in the Open API, and then using that in all of the downstream tooling for benefits.
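A hedged sketch of what that might look like in code (the endpoint, type, and route names are mine, and the AddValidation API shape is from the .NET 10 previews, so treat it as an assumption): with Minimal API validation enabled, data-annotation attributes on a request type are enforced before your handler runs, and the same constraints flow into the generated Open API schema.

```csharp
using System.ComponentModel.DataAnnotations;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddValidation(); // opts in to Minimal API input validation (.NET 10)
var app = builder.Build();

// A request with a 51-character Title now fails validation with a 400
// before the handler runs, and the 50-character limit also appears as
// "maxLength: 50" in the generated Open API schema for CreateTodo.
app.MapPost("/todos", (CreateTodo todo) => TypedResults.Created("/todos/1"));

app.Run();

record CreateTodo([property: Required, StringLength(50)] string Title);
```

That is exactly the hook Mike describes: a test generator reading the document knows a 51-character string is a boundary case worth sending.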
Jamie : Yeah, and that makes sense, right? Because if I, as a developer of an API, want to test my business logic, I maybe don’t want something that talks to the facade to reach into that business logic, figure out what it looks like and how it should respond, and write a test from that. I want to maybe do that myself, because there will be all sorts of different minutiae, and all sorts of different business rules, about why that particular business logic is written the way it is, right?
And I think it was a conversation that I had with Jason Taylor a while back, when we talked about Clean Architecture, and he said, essentially, “you can write your integration tests, you can write your tests to test the facade. But that’s not the actual,” if you’ll excuse the expression, “meat and potatoes of the app.” The app is the business logic.
Mike : Yep.
Jamie : So you want to take your time to actually write the tests for that and make sure it works. The presentation layer, be it a web API or a website or whatever, is super important too. But the way that you perform the actions on that input data to get the output data: that’s the super important part, right?
You know, if I look at my car, I want to know that the engine works. I don’t really care so much that the paintwork is black or red or orange, right? It’s an important thing to help me recognize my car, but when I get in it, I don’t care what colour it is, as long as it goes forward when I tell it to go forward, right?
Mike : Completely, completely agree. Yep. Yep.
Jamie : Well we danced around API… sorry, we danced around AI a little bit already, and I know that you’ve recently started working with the MCP stuff.
Mike : Yes, yes. I missed that in my bio up front. Yes, yeah.
So about six months ago, I guess, I started working on Model Context Protocol and the C# SDK for Model Context Protocol. I’m now the PM for that. And that has been, wow, so exciting, because Model Context Protocol has just exploded. There are just so many things happening with it. And because it’s AI, it’s moving at the speed of light, you know, as you said.
And MCP was only released in November of last year. It’s not even a year old. It’s a baby. We are just trying to mature it as quickly as possible. It’s already providing great benefits, but there’s a lot more that we think it can do, and so we’re working on that.
Jamie : Yeah. The thing with AI innovations is that, oh my goodness, I remember speaking to Martin Woodward of GitHub. It was last year, so 2024. Early last year, I spoke to him in person, and I said, “this AI stuff is moving really fast, and I’m watching it, I’m making a point of watching it, and I feel like I’m being left behind.” So I can’t even imagine what it’s like for y’all at the coalface, and for the people around me who aren’t watching it.
How do I put it? You know how we say that nothing can go faster than the speed of light? I’m pretty sure innovation in AI is going faster than the speed of light, right?
Mike : It’s amazing how many people are working on it, how many things are going on. And the benefits that you can get out of it are really amazing. I mean, I’ve been trying to use it in my own work, right? I will ask Copilot to “generate me this function.” The other day I had it generate some MCP tools for me, because I wanted some tools that did some simple behaviours. And rather than code them myself, I just asked Copilot, “write me an MCP tool that does this.” And there it was.
Jamie : It is crazy.
Sponsor Message
Today's episode of The Modern .NET Show is brought to you by RJJ Software: strategic technology consulting for ambitious SMEs.
You know me as the host of this podcast, but here's what you might not know: I'm also a Microsoft MVP who's helped businesses from Formula 1 teams to funded startups transform technology from a cost center into a competitive advantage. At RJJ Software, we specialize in three things that matter to growing businesses:
- AI that actually delivers ROI: not hype, just practical implementations that pay for themselves
- Developer Experience optimization: we've helped teams achieve 99% faster deployments and 3x productivity gains
- Strategic technology decisions: from architecture reviews to fractional CTO services
The difference? We don't just advise. We ensure successful implementation through knowledge transfer to your team.
If you're an SME leader wondering why your technology investments aren't delivering, or you're facing critical decisions about AI, modernization, or team productivity, let's talk.
Visit rjj-software.co.uk/podcast to book a strategic consultation.
Now, let's get back to today's episode...
Jamie : Before we go further into that, let’s take a step back then. Because I know there’ll be some people listening going, “MC who? Right? I don’t know who that is,” right? Because obviously, like you said, it’s evolving so quickly, and it only really came out in November of 2024. What in the heck is MCP?
Mike : All right. Yep. Yep. Very good.
So Model Context Protocol is a protocol for connecting an LLM, or some AI model, let’s say, to tools, which are executable things; things that can do things for you. Or resources, which would be like data. And the problem that it’s meant to solve is that LLMs typically are trained on information that’s out on the internet at some period of time, right? And when they’re done training, they get released, but they don’t get retrained as new things happen. So there’s information out on the internet, or elsewhere, that the LLM doesn’t know. And how do you get that information into the LLM so that it can act on it? This might be information that’s on the web, and actually a lot of the new AI models know how to go search the web; that’s become a very common thing now for LLMs. But there might be information that’s not on the web. It might be information that’s inside your company. It might be information that’s, you know, in a particular format that the LLM might not know how to browse. So you can provide that information to the LLM through resources or tools, via the MCP protocol.
There can also be actions that you want the LLM to take: creating a file, or the one that I just spoke about, you know, “write some code for me, build this thing.” If you have specific actions that the LLM might not know how to do on its own, but you can provide a tool to the LLM to perform those actions, the MCP protocol is the way to do it.
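On the wire, MCP is JSON-RPC 2.0: when the model wants a tool run, the host sends a `tools/call` request naming the tool and its arguments, and the server replies with a result. `tools/call` is the real MCP method name, but the `create_file` tool and this tiny dispatcher are an illustrative sketch, not any SDK's actual API:

```python
import json

# A toy "server-side" tool the host can invoke. The tool name and
# behaviour are hypothetical; real servers define their own tools.
def create_file(path: str, contents: str) -> str:
    # A real tool would write to disk; here we just report what would happen.
    return f"created {path} ({len(contents)} bytes)"

TOOLS = {"create_file": create_file}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    assert req["method"] == "tools/call"
    params = req["params"]
    result = TOOLS[params["name"]](**params["arguments"])
    # MCP tool results carry a list of content items; text is the simplest.
    response = {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": result}]},
    }
    return json.dumps(response)

# What the host might send when the LLM asks to create a file:
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "create_file",
               "arguments": {"path": "notes.txt", "contents": "hello"}},
})
print(handle_request(request))
```

The key idea is that the model never executes anything itself; it only emits a structured request like this, which the host relays to the server.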
MCP was originally conceived to work very similarly to what they call the language servers that are used in GitHub and many other editors, which help you do coding: compiling the code, finding errors, you know, maybe suggesting corrections, things like that. There are language servers for all these different languages. MCP is meant to operate in the same way. It’s sort of an extension of your AI application, what we call an MCP host, where it provides this extra context to the LLM to do its work.
Jamie : Right, right. So if I pick a really silly example, let’s say I want to build an MCP to help me to build and test my .NET app, right?
Mike : Yep.
Jamie : As long as I can tell an MCP thing, “this is the API for the .NET CLI. This is the API for the .NET test CLI.” And then I can say to it, perhaps in human language, “here is my repository, or here is my code rather,” because I haven’t told it about Git. “Here’s my code. Build it. Run the tests. And maybe, if it’s clever enough, extract all of the test coverage data.” Is that a kind of example? Would that work?
Mike : I think that would work. I haven’t heard people talk too much about building MCPs for languages, but conceptually, what you said all is possible. You would have an MCP that would know how to do all those things, probably expose those as tools, and then attach that MCP into your, you know, editor of choice; whether that’s VS Code or Visual Studio or some people are still using Emacs. I used to be a big Emacs user, by the way, and then VS Code converted me.
Jamie : I’ve wanted to, because I’ve seen all the cool kids using NeoVim. I’m like, that would be nice. That would be nice. But then I’d have to learn Vim key bindings, and then have to live in the terminal. And I’m like, but, you know, sometimes I like having a button that does it for me, right?
Mike : Yes. Yeah. Yeah, yeah.
Jamie : Okay. So just real quick, you said something along the lines of, you know, the LLM was trained on data that was maybe a few months old; because obviously it takes a while for them to get released once they’ve been… well, it takes a while to go from training to released, that’s what we’ll say. Right?
Mike : Yep.
Jamie : And you mentioned that sometimes there’s some information in your, uh, I don’t think you said this, but I’m gonna say in, like, your wiki or your corporate documents, that it can maybe use to help it to help you. Is that different from retrieval augmented generation then?
So, just real quick, for folks who are listening and who don’t know what retrieval augmented generation is, my description of it is: like you were saying, the LLM training stops at some point, the world moves on, and then you’re able to say to the LLM, “yes, you have your training data, but also this wiki, or this document, or this thing over here is the source of truth. Go use that.” So is that different from MCP?
Mike : Well, it’s different in the sense that retrieval augmented generation implies more than just going and finding resources. The thing about retrieval augmented generation, at least the way that I think about it, is you have a set of data that’s been vectorized. And you can do things like semantic search on that data. So when somebody inputs a prompt, you can say, “okay, now find the data within the corpus of my data that’s semantically similar to the question that I’m asking in the prompt.” And then you extract, you know, out of your big corpus of information that could be thousands of documents, you extract five or ten. And then you take those five or ten and you put them in the context window for the LLM.
The whole point of retrieval augmented generation is you can’t take your whole corpus and put it in the context window. It’s too big. Context windows have a limited size. And so retrieval augmented generation lets you pick the most important documents to put in the context window, so that you get the highest quality result from the LLM.
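The retrieval step Mike describes (vectorize the corpus, rank by semantic similarity to the prompt, keep only the top few for the context window) can be sketched in a few lines. The documents and vectors here are toy values standing in for real embeddings from an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=2):
    """Rank (doc, vector) pairs by similarity to the query; keep the best k."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy corpus: in reality these vectors come from an embedding model,
# and the corpus could be thousands of documents.
corpus = [
    ("vacation policy", [0.9, 0.1, 0.0]),
    ("expense reports", [0.1, 0.9, 0.0]),
    ("holiday schedule", [0.8, 0.2, 0.1]),
    ("server runbook", [0.0, 0.1, 0.9]),
]

query = [1.0, 0.0, 0.0]  # e.g. the embedded prompt "how much leave do I get?"
context_docs = top_k(query, corpus, k=2)
print(context_docs)  # the few documents that go into the context window
```

Only the winners of this ranking get pasted into the LLM's context, which is how RAG squeezes a large corpus through a limited context window.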
Jamie : I see.
Mike : And you can do that kind of thing with MCP as well, but MCP is much more general than that. You know, you can use it to extract resources of all kinds, whether they’re vectorized or not, whether you’re doing search on them or not. You can use MCP to execute tools that do things; like, conceivably you could write… I think we even have examples of MCP servers that do airline reservations and travel reservations and things like that, right? And those would be tools that you would provide to the LLM through their context window.
A more concrete example, perhaps: one of the most popular MCP servers is the GitHub MCP server. And the GitHub MCP server can do things like open pull requests, or open issues, or close issues, or things like that. Those are specific actions, you know, that are taken, and that’s something that the LLM can decide it wants to do, but then it has to use the MCP server to actually do it.
Jamie : Ah, okay. So let’s say I have a Git repo. It’s on GitHub. I’ve pulled down the repo, done some work on it in a fork perhaps. I’ve been sitting there, maybe I’ve been using an AI to help me write the code, maybe I haven’t, doesn’t matter. Some code has been generated. I can then perhaps say (and this is me just pulling an idea out of thin air) to the GitHub MCP server, “hey, GitHub MCP, have a look at my local changes, come up with a pull request, link it to this issue, and submit that on my behalf.” Is that the kind of thing that we can do?
Mike : You can do that. And as a matter of fact, you wouldn’t even typically have to say, “use the GitHub MCP server to do this.”
Jamie : Right.
Mike : You could tell the LLM, “I want to create a pull request, link it to this issue, and submit my code.” And the LLM should be able to figure out, “oh, I’ve got some tools that do that. I’ll just use those tools.”
Jamie : Right. That… yeah, okay. Okay. So then they’ll sort of click into place, as it were. That’s… yeah, that’s really cool.
And then, because my LLM is then talking to… is it… am I getting the thoughts right then? My LLM talks to an MCP thing. You said MCP server a few times, and I’ve parroted it back to you ‘cause that’s what you’ve used. So are they all servers? Is it server in the traditional sense of: there is a box somewhere on the internet that my LLM is talking to? Or can it just be an app that runs on my computer? Like, how does that work?
Mike : Yeah, so a couple of things I want to say there.
One is: in the MCP architecture, there’s a thing called the host. The host is really kind of like the AI application; it uses the LLM and it talks to the MCP servers. And when the LLM decides that it wants to use an MCP tool, or access an MCP resource, it doesn’t go and do that directly. It comes back to the MCP host and asks the MCP host to call a tool with a particular set of parameters, or to access an MCP resource.
And at first, when I saw this in the MCP architecture, I thought, “boy, that’s clunky. Why not have the LLM just call these things directly?” And there’s a deliberate reason why it was done this way. It’s because MCP wants to keep humans in the loop. It wants the user of that MCP host, that AI application, to know when an LLM is going to use an MCP tool, what it’s going to try to do with it, and to decide whether it wants to allow the LLM to do that; whether it wants to allow that MCP tool to be called or not. So: keeping humans in the loop, making sure that there’s control, you know, not letting the LLM go rogue and delete all your GitHub repos, right?
So that’s the first thing that I wanted to say: there’s an MCP host that’s sort of the traffic cop, making sure that all the things that are going on are kosher.
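That traffic-cop role can be illustrated with a tiny approval gate: the host refuses to execute a tool call unless a callback, standing in for the human user, approves it after seeing exactly what the model wants to do. The function and tool names here are made up for illustration, not a real host's API:

```python
def run_tool_with_approval(tool_name, arguments, tools, approve):
    """Host-side gate: surface the proposed call to the user before running it.

    `approve` is a callback standing in for the human; it sees exactly
    which tool the LLM wants to run, with which arguments, and returns
    True or False.
    """
    if not approve(tool_name, arguments):
        return {"status": "denied", "tool": tool_name}
    result = tools[tool_name](**arguments)
    return {"status": "ok", "tool": tool_name, "result": result}

# Hypothetical tools a GitHub-like MCP server might expose.
tools = {
    "open_issue": lambda title: f"opened issue: {title}",
    "delete_repo": lambda name: f"deleted {name}",  # the scary one
}

# A cautious "user" who allows anything except destructive calls.
cautious_user = lambda name, args: name != "delete_repo"

print(run_tool_with_approval("open_issue", {"title": "bug in parser"}, tools, cautious_user))
print(run_tool_with_approval("delete_repo", {"name": "my-repo"}, tools, cautious_user))
```

The point of the indirection is visible in the second call: the model can ask for anything, but nothing destructive happens unless the human says yes.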
Now, you asked another question: is this a server, or is this running on my machine? And the answer is, it could be either. The MCP protocol has a set of transports, which are the ways that the host talks to the MCP server. One of the transports is what’s called stdio. And this is the way language servers work today. I said that, you know, language servers were sort of the inspiration for MCP. The language server runs locally, and your editor talks to it over pipes, basically: stdio. And so you can run your MCP server that way. It’ll run locally. It’ll run on your machine. Nothing goes out on the internet. Or there’s another transport, called the streamable HTTP transport. And that will allow your MCP server to be running on another machine somewhere, and you talk to it over HTTP. So it could be either.
Now, there are some very important differences between these two transports. When you’re talking stdio, it’s basically your server. It’s your MCP server, because it’s running on your box. There’s only one person talking to it, right? It’s you. Whereas if you’re using the streamable HTTP transport, and that thing is running elsewhere, that MCP server could be talking to you, and it could be talking to your neighbour, and it could be talking to somebody halfway across the world, all at the same time. It’s just like a regular web API, right? In that it’s sitting there on the internet, you’re talking to it, and many other people could be talking to it at the same time.
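In the stdio transport, host and server exchange newline-delimited JSON-RPC messages over the server process's stdin and stdout, much like a language server. A sketch of that framing, using a plain string in place of the real pipe (the method names are real MCP methods; the framing helpers are illustrative):

```python
import json

def frame(message: dict) -> str:
    """Serialize one JSON-RPC message as a single newline-terminated line."""
    return json.dumps(message) + "\n"

def read_messages(stream_text: str):
    """Parse a stream of newline-delimited JSON-RPC messages."""
    return [json.loads(line) for line in stream_text.splitlines() if line.strip()]

# Two messages a host might write to the server's stdin, one per line.
wire = frame({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}) + \
       frame({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
              "params": {"name": "build", "arguments": {}}})

for msg in read_messages(wire):
    print(msg["id"], msg["method"])
```

The streamable HTTP transport carries the same JSON-RPC messages, just over HTTP to a remote machine instead of over a local pipe, which is exactly why the multi-user and security concerns below appear.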
This has some security implications that, you know, you need to be careful about. There’s authentication and authorization built into the MCP protocol, to try and make sure that all the right security things are done. But it’s a new protocol, so we’re learning, you know, how we can improve this as things go on.
Jamie : So that fills in a whole bunch of gaps that I had in my knowledge, so thank you for that.
Mike : Okay.
Jamie : So just real quick then, picking up on the security aspect. So you’re saying I can have an MCP host that talks to an app of some kind, or a server of some kind that may reside on my machine, that I still, as a human, need to okay everything it does. But it could also do the equivalent of an rm -rf (“rimraf”), or, you know, delete System32, or whatever the Windows version is now. It could still do those malicious steps, as long as I say to it, “yes, do that,” right? So I still need to be thinking critically: “what is it that it wants to do, and how does it do that?” I think that’s kind of an important point, right?
Mike : Yes. Yes. And, you know, when we say run on your machine: when you run code on your machine, you really need to trust it. I mean, it’s like anything else. You know, once the code is running on your machine, it has access to whatever is on your system. So what some people are doing is they’re running the MCP server locally, but they’re running it in a Docker container, as a way of isolating it from the resources of the rest of your system.
So that’s one thing that you can do. But yeah, with running things locally, you want to really make sure that it’s a trusted application. And that you are, as you said, validating any of the actions that it’s going to take, before it takes them, to make sure that it’s not going to do anything destructive.
Jamie : Yeah, because I can imagine… in fact, no, I have seen this. One of the things I do a lot of work with is Claude Code. And Claude Code is this agentic thing, and it can suggest, “I can do this thing for you. Let me delete this thing,” or, “let me rewrite this thing.” And if you just keep going, “yes, yes, yes, yes, yes,” then you’re not really paying attention to what it’s doing.
But you have to really take a moment to, like you said, read through: “your MCP thing is going to do this step, this step, and this step. Are you happy with that?” And then you can say yes. Or perhaps, and I’m just gonna ask this question now: can I say no, and say, “don’t do step two”? Like, is that something that I can do?
Mike : With MCP, it depends upon how the LLM presents these requests to the host, and how the host presents them to the user, right? In the MCP parlance, it’s all broken down to, “call this tool with these parameters,” “access this resource.” So it would be presented to the user, hopefully, with that information; and whether that’s step two or whatever would be up to the host.
Jamie : Okay, I mean, that makes sense, right? So, what we’re saying is (and we’ll come around to it in a minute) when folks are building their Model Context Protocol-based apps, they need to be thinking about: “what information do I need to present to the user, in user-friendly language?” Right?
Mike : Yes.
Jamie : A lot of devs, a lot of engineers, spend time… especially if you’re working on APIs, like we talked about earlier on, they spend their time writing messages for other engineers to read. But now we’re entering into a world where your message could be intercepted by an LLM, or by the MCP host, and shown to the user, right?
Mike : Right.
Jamie : So you need to, I guess, change the way that you write those messages, because anyone can use an MCP, right? You don’t have to be a developer to use one. So your information, I suppose, has to be in a format such that Joe Bloggs the user, or Jane Smith the user, or indeed, you know, Mike the software engineer, can read it, understand what it means, and understand what’s gonna happen.
Mike : Yep. Yep.
Jamie : Okay, so we’ve been talking in sort of vague terms about what MCP is. Let’s say I have an idea for some MCP server, some MCP thing; something that I want to expose via the MCP host to an LLM and to a user. I have a tool, I have a system. And I want to do that in .NET. How am I going to do that, Mike?
Mike : Well, using the MCP C# SDK is one way to do it. And that’s the SDK that I’ve been PMing, product manager for, for a while now. It’s a very nice system. You can build either stdio or streamable HTTP servers with it.
The streamable HTTP servers are built on top of ASP .NET Core, so if you’re a .NET developer building web APIs, this will be very familiar to you. In your program.cs, you do some setup, and then, when you build your app, you say app.MapMcp, much like you would have MapGet or MapPut or whatever. This maps the MCP endpoints, and you define a set of tools, and you define a set of resources. There’s also something called prompts, which I didn’t talk about much. But prompts are a way for the MCP server to provide sort of canned instructions to an LLM that would use its tools, or use its resources, in the way that they were intended. And so all of those things can be provided.
You can write all of those in your MCP server with the MCP C# SDK. And there are a bunch of examples. There are some sample projects built right into the repo. One of the things that I’m doing now is trying to build out some of the conceptual documentation for the SDK. We have our reference documentation that shows all of the different classes and features that are available, but it doesn’t really describe, “okay, well, here’s how you would use them.” That’s the role of conceptual docs, that people actually have to write, and we’re working on putting those together now. But yeah, just pick up the MCP C# SDK and go to town.
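The registration model Mike describes (define tools, then let the server advertise and invoke them) can be mimicked language-neutrally with a small registry. This is a conceptual stdlib-Python sketch of the idea, not the C# SDK's actual API:

```python
# Registry of tools this hypothetical server exposes.
TOOLS = {}

def tool(description):
    """Decorator that registers a function as an MCP-style tool."""
    def register(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return register

@tool("Echo the input back to the caller.")
def echo(text: str) -> str:
    return text

@tool("Add two integers.")
def add(a: int, b: int) -> int:
    return a + b

def list_tools():
    """Roughly what a 'tools/list' response would advertise to the host."""
    return [{"name": name, "description": meta["description"]}
            for name, meta in sorted(TOOLS.items())]

print(list_tools())
print(TOOLS["add"]["fn"](2, 3))  # → 5
```

The descriptions matter more than they look: they are what the LLM reads when deciding which tool fits a request, which ties back to the earlier point about writing messages for non-developers.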
Jamie : Is that, like… I can’t remember the phrasing y’all use. Is that, like, RTM’d, or release candidate-ised? Or is that fully released? Is that still a work in progress?
Mike : It’s in preview right now. And it’s in preview primarily because the MCP spec itself is still evolving. You know, in .NET we’re very particular about declaring something as GA, or stable. Once it’s GA, we don’t plan to make any breaking changes to it. We expect people to be able to continue to use it and not have to absorb breaking changes. And because the MCP spec itself is still evolving, we haven’t gained enough confidence yet to be able to say, “okay, we can actually absorb any changes in the spec and present them in a non-breaking way to SDK users.”
So it’s still preview, but we do our best to try and minimize any breaking changes between releases. We had a big update a couple of months ago, when they released the newest version of the spec. The versions are date-based, so 2025-06-18 is the current latest version of the spec, and it was released on that date. We had a few minor breaks there, maybe, but nothing really major. There should probably be another spec coming in September-ish, and we’ll see whether there are breaking changes at that point; but we will try to minimize them, so that people won’t have to, you know, make big changes to their app in order to adopt the new version.
Jamie : Right. And just for everyone listening, obviously we’re recording this a little bit ahead of when we’re releasing, so
Mike : Ah yes.
Jamie : When Mike says, “there should be a version coming out in September,” he means last month if you’re listening to this in the present, which is the future for us, but will be the past for the listener.
Mike : Time is a crazy concept.
You know that moment when a technical concept finally clicks? That's what we're all about here at The Modern .NET Show.
We can stay independent thanks to listeners like you. If you've learned something valuable from the show, please consider joining our Patreon or BuyMeACoffee. You'll find links in the show notes.
We're a listener supported and (at times) ad supported production. So every bit of support that you can give makes a difference.
Thank you.
Jamie : So I wonder then, what is it like if you’re creating a library, or an app, or something like the MCP C# SDK, against what is effectively a moving target; but once you hit GA it has to stop moving, and you’re not in control of how much it’s moving? Like, how is that even possible to do?
Mike : Yeah, it is very challenging. What makes it even more difficult is that Anthropic, which is currently, you know, the originator and sort of core maintainer of the MCP spec, is working mostly in TypeScript and in Python, which are relatively loosely typed languages. Now, there is typing in them, but you can do things like have unions of an array and a string.
Jamie : Right.
Mike : Which is not something that we can really do in C#. So they might do things at the protocol level that aren’t technically breaking at the protocol level. But when we think about what the C# interface looks like, it winds up being breaking at the C# interface.
So it is very challenging, and we’re still figuring it out. You know, we do have members of Microsoft on the MCP steering committee, so that we can provide input on changes and, sort of, make them aware of when their changes might impact the C# SDK and its users. But in the end, you know, they’re viewing this as, “well, some of these changes have to happen, and if they’re not breaking at the protocol level, you know, you’re gonna have to figure out how to handle it.” And so we’re working on that right now.
Jamie : My goodness. That’s… yeah, a moving target for a moving target. Yeah, I do not envy you, Mike.
Okay. So obviously one of the things that folks, when they build their apps, are very keen on, and we’ve mentioned it a few times, is tests. Right? I have attempted to figure out how to write tests when an LLM can be involved, and things there get a bit wobbly. And some advice that I’ve received previously is, “you can probably test everything up to the LLM, but not the communication with it, because obviously it’s non-deterministic.” Now, as an MCP author, you’re receiving information from the LLM, you’re going to be taking that information, doing some stuff, and returning the output, right? But via the LLM. From a testing perspective, what does that look like? Because I think I just broke my brain trying to think about how it works.
Mike : Well, I think you’re right that we have to be prepared for whatever the LLM sends us. But that’s okay, because that part of it is really not much different than being prepared for what any user sends us; I mean, any user could send us anything anyway. So that part of it is not so bad.
But what some people are doing now is they’re actually building MCP tools that, underneath the covers, have AI models built into them. That then does become challenging, in the way that you described: I call an MCP tool, and one time I call it, it gives this output, and the next time I call it, it gives some other output; because the LLM that was used underneath the covers decided to give something different. It’s a brave new world in terms of, you know, testing when AI is involved.
There are some things that you can do. So, one of the things is: a lot of the models have a parameter that will limit the creativity, if you will, of the response. OpenAI’s models, the ones that I worked with anyway, have a parameter called temperature, which is kind of an interesting name for the parameter. But you can set the temperature so that its output is almost deterministic, right? So you can test that way. But generally you don’t want to run that way, because you want the LLM to be creative. That’s kind of why you’re using it.
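Temperature works by rescaling the model's token scores before sampling: low values make the most likely token dominate (near-deterministic, good for tests), while high values spread probability out (more "creative"). A toy softmax shows the effect; the logits here are made-up numbers, not from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; lower temperature sharpens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # raw scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # probability spread out

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At temperature 0.1 the top token takes essentially all the probability mass, which is why dialling temperature down makes test runs repeatable; at 2.0 the alternatives stay live.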
So it is a challenge. It’s a real challenge.
Jamie : Right. So one of the things that you said there that kind of blew my mind is that some people are creating these with, my words, not yours, “AI all the way down.”
Mike : Yep, yep, that is true. That is true.
Jamie : What? And then they’re probably using Copilot or something like that to write the code in the MCP server, right? So it literally is AI all the way down.
Mike : AI all the way down. I think that’s right. That’s where we’re going.
Jamie : Wow. Wow. I mean, it kind of makes sense, right? I’m no expert on this, but I think we’re on a trajectory, whether it’s a fast trajectory or a slow trajectory, perhaps away from human-centric programming languages. Where eventually we might be able to describe to the computer, and this is me, you know, putting on the hat of, “if I were a science fiction author, this is what I’d be thinking,” the shape of the world that I want it to be in.
‘Cause when I… okay, let me dial back a second. For what I’m about to say: I always describe to non-developers, non-engineers, non-software engineers, that a computer program is just a finite state machine. You’re altering the computer to be a different finite state machine, based on what you want to achieve. Then, when I get asked, “what does that mean?” I say, “you’re making a cup of coffee, right? What do you need to do?” The steps to make a cup of coffee are the finite state machine for making that cup of coffee. There are only so many states you can get into.
If we’re at a point where people are already using AI top to bottom, all the way down, and probably using AI to write it… personal opinion: maybe we’re on a trajectory towards not using human language-centric programming languages anymore. And we might just be able to, at some point way off in the future, describe to the computer, “this is the ideal situation that I want you to solve this problem in. Here are the rules, here are the inputs and outputs you should expect. Go solve the problem for me.” And it goes away and does it, right?
Mike : I think we are approaching that. We are approaching that. I mean, one thing that already is the case is that the barriers between programming languages are becoming lower, right? It used to be that people would say, “well, I’m a C# developer and I know C# and ASP .NET, and that’s what I program in,” right? But more and more, if you need to write something in Python, you don’t have to be a Python expert. You can ask the LLM, you can ask your Copilot, to build you this Python script, and often it does a pretty good job. Now, you know, if it’s something that’s mission critical, you still want somebody who knows Python to have a look at it and make sure that it’s doing what you want. But whether we actually get to the point where we ditch programming languages altogether and just use plain language? I probably am not gonna see that. But maybe at some point.
Actually, I wanted to go back a little bit to the point about AI all the way down, because there’s a word that we haven’t used yet that is getting used quite a bit. And that’s agents. I think more and more people want to think about these things as if they are agents, as if they are assistants, if you will; you can just ask your agent to do something for you, and it goes off and gets it done. And if your agent has to use some other agent in order to get that done, it can go do that. And, you know, this is something that we’re working on making sure MCP can let you do; whether it’s just having that AI model in the back end, or calling other MCP servers from your MCP server. That’s where we think things are going.
Jamie : And that makes sense, right? If you take that metaphor and put it into like the human world. Most people still book a vacation through a travel agent, right?
Mike : Yep.
Jamie : And a travel agent is someone who works on your behalf to make sure your flights are booked, your hotel is booked, your transport to and from the hotel and the airport, perhaps, is booked. Maybe you say to the travel agent, “I also want to go on some kind of excursion.” Say I’m going somewhere famous, like Egypt: I also want to spend a day looking at the Great Pyramids. Or maybe I’m going somewhere in India: I want to be able to go see the Taj Mahal. They will figure all of that out for you. You’re giving them the vague instructions of, “I want to go to this place for a vacation; figure it all out.” That’s kind of what we’re saying about MCPs and agents, right? “Go do this. I don’t care how you do it. Go figure this out.”
Mike : Right.
Jamie : Right? “Book me a holiday,” “book me a vacation,” or “create a pull request,” or “this code that exists on my machine needs to get up to GitHub. Figure it out,” right?
Mike : Yep. Absolutely. I just did this myself the other day.
I’ve got a trip to New York City coming up, and I’m landing at JFK, and I have a hotel in downtown Manhattan. And I needed to figure out how to get from the airport to the hotel. And I just asked ChatGPT. It told me, you know, “use this ground transportation, and then hop onto this subway, and it’ll take you right there.”
Jamie : Which you could have Googled yourself, but presumably it Googled it for you to get the information, right?
Mike : Yep. Yep.
Jamie : And that’s what an agent is for. “I don’t know how to figure this out. You figure it out.”
Mike : That’s right.
Jamie : I love it. It’s exciting times. They say it’s scary, but also exciting. Amazing.
Mike : Yes.
Jamie : So, Mike, we’re rapidly running out of time. I wish I could sit here and talk to you all day, but you’re a busy person, so we’ll let you have the rest of your day back.
But before you disappear: somebody’s been listening in and going, “I really like the sound of this OpenAPI stuff, and what’s happening with ASP .NET Core with, like, validation and things like that.” Or they’re listening in going, “MCP sounds amazing. I want to get started with the MCP C# SDK and other three-letter acronyms.” How do they go about learning all of that? I know I’ve got some resources from you; I’ll put those in the show notes. But is there anything that comes to mind immediately? “Hey, go learn this first.”
Mike : Well, the one link that I’ll point out, from the set that I gave you, is the set of Microsoft MCP servers that’s in the Microsoft MCP GitHub repo. There’s a long list there. And I mean, it’s remarkable how quickly we’ve moved, right? This is a protocol that’s not even a year old, and already we have, you know, over a dozen, maybe over 20, MCP servers in that list that are built by Microsoft, that are available now, that people can use.
Go look at that set of MCP servers. You probably want to, you know, try out some of those. And then that will give you some inspiration. That’ll say, “oh, now I see what MCP can do for me. Now I want to go build something that does something specific to my use case.” Then you can go find the C# SDK and build yourself your own MCP server, which is something that you can just, as we talked about before, run locally on your machine and it’ll just do things just for you.
Or, if it’s something that you think other people would benefit from, you could figure out how to host it in Azure. Something that we didn’t talk about, actually, is that you can host MCP servers in Azure Functions, which is a pretty cool thing, because Azure Functions is sort of pay-as-you-go. You only pay for the resources that you use, so you can host something out there, and then people can use it or not, and you’ll only get billed for what gets used. So those would be my calls to action.
Jamie: Right. Then if folks wanna maybe keep an eye on what you’re working on: I know some people aren’t on socials, and I know that there is one social media network that has changed a lot recently, is how I’ll put it. It changes the way that it works. Are you a social media person? Some people aren’t.
Mike:
You know, I’m not real good about social media, but the two places that you can find me are LinkedIn and Bluesky. I do try to respond to people who connect with me on LinkedIn or send me messages there. Both places, it’s Mike Kistler. Very simple, easy to find.
And then the other place that is great for hearing about what’s going on in ASP .NET Core is the .NET Community Standup, which happens every Tuesday at 10am Pacific time. I’ve been a pretty regular guest on that, what do you call it, stream? And we talk about the things that are coming up in ASP .NET Core, things that we’re talking about. Sometimes we just open it up for people to tell us, you know, what are you thinking? And we try to get feedback on where we need to take the product going forward. So those are great places to find me.
Jamie:
Awesome. I’ll make sure to put links to those in the show notes, so that nobody has to go away and, you know, do some Googling or Binging or whatever to find them. They’ll be in the show notes, folks. Just don’t look at the show notes while you’re driving. That’s my only ask.
Maybe there should be an MCP server for The Modern .NET Show. And then people can ask, “what was that website that Mike recommended?” Maybe that’s what I should… maybe that’s what I should do.
Mike: Oh please, please. And if you do that and run into any trouble, let me know and I’ll be happy to help.
Jamie:
Ah, thank you very much, Mike. Thank you very much.
Well, I mean, like I said earlier on, I feel like I could talk to you all day, but you’re very busy, so let’s get you back to doing the stuff that is important to you. I have had an absolute blast talking to you today.
Mike: Likewise.
Jamie: And I’m walking away with way more information than I had when I walked in. And like I said, you may have just inspired me to try and make an MCP server for the podcast.
Mike: That would be great. That would be great. Thank you, Jamie. Thank you for having me. It’s been a blast. I’ve really enjoyed it.
Wrapping Up
Thank you for listening to this episode of The Modern .NET Show with me, Jamie Taylor. I’d like to thank this episode’s guest for graciously sharing their time, expertise, and knowledge.
Be sure to check out the show notes for a bunch of links to some of the stuff that we covered, and a full transcription of the interview. The show notes, as always, can be found at the podcast’s website, and there will be a link directly to them in your podcatcher.
And don’t forget to spread the word, leave a rating or review on your podcatcher of choice—head over to dotnetcore.show/review for ways to do that—reach out via our contact page, or join our discord server at dotnetcore.show/discord—all of which are linked in the show notes.
But above all, I hope you have a fantastic rest of your day, and I hope that I’ll see you again, next time for more .NET goodness.
I will see you again real soon. See you later folks.
Useful Links
- OpenAPI
- API Blueprint
- RAML
- ProducesResponseType attribute
- Minimal API
- TypedResults
- S07E16 - From Code to Cloud in 15 Minutes: Jason Taylor’s Expert Insights And The Clean Architecture Template
- GitHub MCP Server
- MCP Transports
- MCP C# SDK
- Current version of the MCP spec as of the date of recording (aka version 2025-06-18)
- Microsoft MCP Servers List
- Mike on LinkedIn
- .NET Community Standup
- Supporting the show:
- Getting in touch:
- Podcast editing services provided by Matthew Bliss
- Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show
- Editing and post-production services for this episode were provided by MB Podcast Services