The Modern .NET Show

S07E17 - Google Gemini in .NET: The Ultimate Guide with Jochen Kirstaetter

Sponsors

Support for this episode of The Modern .NET Show comes from the following sponsors. Please take a moment to learn more about their products and services:

Please also see the full sponsor message(s) in the episode transcription for more details of their products and services, and offers exclusive to listeners of The Modern .NET Show.

Thank you to the sponsors for supporting the show.


Supporting The Show

If this episode was interesting or useful to you, please consider supporting the show with one of the above options.

Episode Summary

In this episode, we spoke with Jochen Kirstaetter, known as Joki, a senior software developer, community founder, and Microsoft MVP for developer technologies. We delved into his work on an SDK for .NET that allows developers to effectively use Google Gemini, Google's latest generative AI model. Joki shared insights into his background, including his journey from Germany to Mauritius, where he has made significant contributions to the tech community.

We began by discussing how Joki identified a gap in the offerings for .NET when exploring Google’s SDKs for Gemini. Recognizing that while Google provided SDKs for several languages such as Python and Java, a corresponding SDK for C# was lacking, Joki took the initiative to fill that void. He explained how his project started during the Gemini Sprint, where Google sought community feedback on the Gemini technology. Joki detailed his process of studying existing SDKs and the REST API to create a seamless experience for .NET developers.

Our conversation moved to the technical aspects of Joki’s SDK, where he elaborated on the design decisions that enabled compatibility with both Google AI and Vertex AI. Unlike other Google SDKs that maintained separate libraries for each variant, his SDK integrates both functionalities. He emphasized the importance of allowing users to choose between a simple API key for Google AI or an authenticated approach through OAuth 2.0 for Vertex AI, offering flexibility for developers.

The discussion also covered the different authentication methods utilized by the SDK, highlighting how it achieves a modular design to cater to varying deployment scenarios. Joki’s approach allows the SDK to be lean while offering additional capabilities if used in a Google Cloud environment. This adaptability pleased him, as it empowers developers to use the SDK according to their specific project needs.

This episode serves as an excellent resource for .NET developers interested in leveraging Google’s generative AI technology. Joki’s passion for sharing knowledge and fostering community engagement in the tech world shines through, making it a valuable listen for anyone looking to expand their understanding of AI integration within their applications.

Episode Transcription

So on my side it was actually, the interesting experience was that I kind of used it one way, because it was mainly about reading the Python code, the JavaScript code, and, let’s say like, the Go implementations, trying to understand what are the concepts, what are the ways about how it has been implemented by the different teams. And then, you know, switching mentally into the other direction of writing the code in C#.

- Jochen Kirstaetter

Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie “GaProgMan” Taylor.

In this episode, Jochen Kirstaetter joined us to talk about his .NET SDK for interacting with Google’s Gemini suite of LLMs. Jochen tells us that he started his journey by looking at the existing .NET SDK, which didn’t seem right to him, and wrote his own using the HttpClient and HttpClientFactory classes and REST.

I provide a test project with a lot of tests. And when you look at the simplest one: you get your instance of the Generative AI type, to which you pass either your API key, if you want to use it against Google AI, or your project ID and location if you want to use it against Vertex AI. Then you specify which model you’d like to use, and you specify the prompt, and the method that you call is then GenerateContent, and you get the response back. So effectively, with four lines of code, you have a full integration of Gemini into your .NET application.

- Jochen Kirstaetter

Along the way, we discuss the fact that Jochen had to look into the Python, JavaScript, and even Go SDKs to get a better understanding of how his .NET SDK should work. We discuss the “Pythonistic .NET” and “.NETy Python” code that developers can accidentally end up writing, if they’re not careful when moving from .NET to Python and back. And we also talk about Jochen’s use of tests as documentation for his SDK.

Anyway, without further ado, let’s sit back, open up a terminal, type in dotnet new podcast and we’ll dive into the core of Modern .NET.

Jamie : [0:00] So, JoKi, welcome to the show. We’ve not been properly connected before and, as we said in our previous bit at the beginning, offline, outside of the recording, it’s taken a couple of hours for us to get fully set up with this, because things got in the way, like Halloween and family things. But, you know, to set expectations for everyone, we’re recording this on the 28th of November 2024. So, that being said: welcome to the show, JoKi. It’s great to have you with us.

Jochen : [0:31] Thank you so much, Jamie, for the invitation. Let me introduce myself a little bit. So my name is Jochen Kirstaetter, also known as Joki. I’m a senior software developer, crafter, blogger, community founder, as well as a regular speaker.

Jochen : [0:47] Yes, we are both in the Microsoft MVP program. That’s where we connected initially. So I’m, again, Microsoft MVP, this time for developer technologies. Apart from that, I’m also a Google developer expert for Google Cloud. And as you know, these are kind of recognitions regarding community contributions, the passion to share knowledge, and also my activities to engage with other IT people here on the island.

Jochen : [1:19] Oh yeah, talking about islands, let me tell you that I’ve been living in Mauritius for close to 18 years now, even though I’m from Germany. So again, thanks for having me on the show.

Jamie : [1:33] No worries at all. No worries at all.

Jamie : [1:39] So we’re going to be talking about stuff related to AI. Now we’ve covered generative AI a whole bunch of times on the show. But previously when we’ve talked about it, we’ve talked about it from a technology-agnostic perspective, right? But I thought that, because you have written essentially an SDK for .NET to allow folks to use Google Gemini, maybe we could talk about that. Is that a good thing? Yeah.

Jochen : [2:10] Yeah. Perfect.

Jamie : [2:11] Okay. So, let’s talk about that then. So, how, I guess, okay, right. So, we already know what it is. It’s the .NET SDK for Google Gemini. But could you give us like an overview of what that actually means? Like, what can I do with it? That kind of thing.

Jochen : [2:29] Okay. So, initially, I started it at the end of February. There was something called the Gemini Sprint, and Google was actually asking community members what could be done with Gemini. And I had a look and I discovered that there are official SDKs provided by Google for different programming languages, like Python, Go, Java, even Swift. And I was like, “come on, there’s no C#, there’s no .NET. Let’s give it a shot, let’s give it a try.”

Jochen : [3:05] And I discovered that their examples were also provided as curl command line statements, so directly communicating with the REST API. And I thought, “okay, that should work. I take the examples for the REST API and, you know, take my knowledge from .NET, especially in regards to the HttpClient and HttpClientFactory, and pick it up and provide a similar experience to the other SDKs for fellow .NET developers.” That’s literally how it started.

Jamie : [3:45] Okay. So then, okay. So one of the things that I know is that I’ve chatted with Jon Skeet, who at the time, I’m not sure if he still is now, but at the time was working for Google on the .NET SDK that he brought out of there. So I know that they have an SDK. I don’t know.

Jochen : [4:06] Yes, yes, yes.

Jamie : [4:07] A full SDK for everything, right?

Jochen : [4:10] The thing is that I discovered at a later stage that there are the client libraries from Google for .NET. And I have to say they use a different kind of approach. It’s called the prediction service. And the examples that they are offering in their GitHub repository just look too complicated, because you literally had to inject your JSON structures as strings into your actual .NET code in order to then make the requests.

Jochen : [4:51] And looking at the other Gemini SDKs, especially the one for Python, because this seems to be the primary SDK that’s coming from the Google teams, I thought that the existing one, yes, developed by Jon Skeet and others, it just didn’t feel the right way. It was completely different to the existing Gemini SDKs. And so I thought, “no,” I mean, you know, you get all the examples in Python, in Go, in JavaScript. And then you look at the prediction service and it’s like, “whoa, this is like English. And somewhere it’s like French.” And so I thought, “no, come on, I’m going to write the SDK from scratch based on the REST API, so that it looks and feels and behaves exactly like the other official SDKs by Google.”

Jochen : [5:53] And the result was that I only discovered the prediction service at a later stage, so I was already set to write it really from scratch in a way with the, let’s say, method signatures, the objects involved, based on the Python SDK.

Jamie : [6:16] Right, okay. That kind of answers my next question, which is: how did you plan out what the API should look like, right? If you’ve already got an API in a different language that looks kind of how you want it to behave, then you can use that as your design document, I guess. Is that how you went about it?

Jochen : [6:40] Exactly. It was literally my blueprint about, “okay, this is the way how the Python SDK works with the classes, the objects, the methods.” I compared it. I saw quite a number of differences towards the JavaScript SDK. I also had a look at the Go SDK. And then I was like, “okay, that is some cool fashion or cool way how they implemented it here. That’s a cool way about how to implement it there.” And I thought, “okay, I can adapt the best of both and put it based on the way how you would implement things the .NET way,” and that’s exactly how I went forward.

Jochen : [7:26] The .NET SDK that I wrote effectively has all the functionalities and features that the Python SDK is providing, and also, since back in March when I started working on it, I have kept track of all the new additions that came from the Google team, and I try to keep it on par as quickly as possible. But yeah, as it is an open source project, anyone can also chip in and, you know, provide pull requests, provide suggestions, and open up discussions about certain things, where we’re really happy and glad to be involved.

Jamie : [8:14] Cool, okay. So we’ll talk about the open source nature of it in a moment, and where folks can go to have a look at the source code. But I know from my own sort of experimentation–and long-term listeners of the show will know that I plan out questions and topics with guests, so we’ve got a bit of a planning document–and I know that the Google Gemini API has two, kind of, flavours, I guess? Is that the right word? I wonder if you could talk to either of those and what the differences are?

Jochen : [8:43] Yeah. Absolutely, that’s right. So when you hear about Google Gemini, it’s, you know, like this Spider-Man meme where they are pointing at each other: there are different incarnations of Gemini. And this also applies to the Gemini API, which comes actually in two variations. One of them is the so-called Google AI, which is related to Google AI Studio, which is like a publicly available website where you can experiment with Gemini for free. And it only requires you to have an API key that you can generate, and then you can use it and communicate with the Gemini API just based on an API key. So that’s the Google AI part, which is kind of like, yeah, publicly accessible, not really high security involved. That’s on the one side.

Jochen : [9:48] And then, I would say it’s like the big brother, you have the Gemini API from Vertex AI. And Vertex AI is a service that comes from Google Cloud Platform. Meaning there you then have your proper project organization. You need to have a project ID. It’s connected to billing. You have the possibility to specify locations all around the world because, I mean, Google Cloud Platform operates in now, I think, 26, 27 regions. So when you use the Gemini API against Vertex AI, you can be more specific regarding where your audience is. It’s also then that you need to have proper authentication. It’s based on OAuth 2, I think.

Jochen : [10:45] It is bound to your personal space in Google Cloud based on your project ID. And it’s more secure, in such a way that you can better integrate it with maybe other services or applications that you’re already running on Google Cloud Platform; or maybe cross-cloud, where you run on Azure and you want a proper, secured way for your app or app service to communicate with the Gemini API, not just based on a simple API key, but on proper OAuth-based authorization and authentication.

Jamie : [11:25] Right, okay. So then there’s a whole bunch of detail there, JoKi, that we’ll talk about in a moment. I’m doing this, “push the detail back down the pipe a little bit,” as we’re moving along. But you’ve built this SDK, there’s effectively two flavors for Gemini’s API. So does the SDK that you’ve built support both of those APIs, the Google AI and the Vertex AI API, or is it just one? Or how does that work?

Jochen : [11:56] Absolutely, yes. So my SDK for .NET is implemented in such a way that you can use it against both endpoints. So you can use it with an API key, and then it operates automatically against the Google AI API.

Jochen : [12:16] Or you specify your project ID and your location, and it then tries to retrieve an access token out of the environment of your application itself; otherwise, you can also specify the access token, and then it communicates, makes the request, against the API running in Vertex AI.

Jochen : [12:37] So, yeah, meaning that my SDK supports both endpoints at the same time. Which is actually quite funny, because when you look at the official SDKs provided by Google, the situation is that they are offering two separate SDKs, one for each endpoint. Even though it is literally the same Gemini API and the same models. However, there’s the Python SDK for Gemini on Google AI, and there’s the Python SDK for Gemini on Vertex AI.

Jochen : [13:20] And funny story, back in July, I met with one of the guys working in the team. It was actually at the Google I/O Connect in Berlin. And I was like, “what’s going on? Why do you have two teams, you know, working more or less hand in hand, maybe even independently, on two SDKs instead of putting them together?” And he was like, “yeah, it’s on our roadmap.”

Jochen : [13:50] So I would say that my .NET SDK is ahead of the plan. And you can easily take advantage of both endpoints in one package, no code change. You decide whether you want to use an API key, or if you want to switch over, literally migrate, as they call it, to Vertex AI by going for your project ID and your location, and you’re done.
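To make that concrete, here is a rough sketch of the two ways in. The type and parameter names below are paraphrased from JoKi's description rather than taken from the library's documentation, so treat them as illustrative and check the project's readme for the exact API:

    using Mscc.GenerativeAI;

    // Google AI: authenticate with just an API key.
    var googleAi = new GenerativeAI(apiKey: "YOUR_API_KEY");

    // Vertex AI: identify the Google Cloud project and location; the access
    // token is resolved from the environment, or can be supplied explicitly.
    var vertexAi = new GenerativeAI(projectId: "my-gcp-project", region: "us-central1");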

Jamie : [14:18] Right. That’s super interesting to me, that they’ve, for one reason or another, produced two separate APIs, whereas you have been able to combine them for the app… for the .NET SDK. Interesting.

Jamie : [14:33] I know that you said that you spoke to one of the folks behind it and they said “it was on the roadmap,” to perhaps combine them. But yeah, I would be very interested about why they built them separately. I guess it’s a case of, “hey, we’ve got two APIs, we’ve got two teams, let’s each build our own thing.”

Jochen : [14:53] I would assume that it was like, “okay, we have here our Gemini models on Google Cloud.” And then they came up with, “we need more publicity; we need more accessibility,” and they started to provide it then in the public Google AI Studio. Which actually, under the hood, uses Gemini running on Vertex AI. So it seems, it feels like the Google AI-based version is like a delegate or frontend for Google to see what the capabilities of the Gemini models running in their Vertex AI environment are.

Jamie : [15:39] That makes sense. That makes sense. Interesting.

Jamie : [15:44] Okay, so there’s loads of detail that we’ve, sort of, glossed over so far. But I’m interested in talking about auth, because auth and security is something super important. And I know that if I’m using, say, an Azure API, I can do standard sort of OAuth stuff to authenticate with it. But I do know that there is sometimes this fun, strange, very Google-specific way to authenticate with some of their APIs. And I know that in our planning session you said there was like a very Google-specific way to do that auth workflow. So I wonder if it is totally different to a standard OAuth or Auth0 workflow. And I’m wondering, can you talk about how that all works?

Jochen : [16:39] Yes. The concept of how I did it is actually that the SDK comes in multiple packages. So meaning I have like a base package: Mscc.GenerativeAI. Which is really the way I implemented the client, with the absolute minimum, or hopefully no, dependencies. Okay, I need System.Text.Json and other namespaces from .NET. However, I wanted to avoid having any kind of additional third-party libraries.

Jochen : [17:27] So that’s the base package. And what it does is that it tries to use, for example, the gcloud CLI command line tool in order to use your credentials to retrieve an access token. So this is what’s happening under the hood of my SDK, which is also described in the documentation about how you would run the other SDKs against Vertex AI. However, I also have then a second NuGet package that is called Mscc.GenerativeAI.Google, and it adds the official Google client libraries in order to take care of the authentication parts.

Jochen : [18:16] Meaning, if you would deploy your application into Google Cloud environments, then you would actually take the NuGet package Mscc.GenerativeAI.Google, because it already has the Google Cloud client libraries integrated, and it then takes care, with the functionality provided by Google, of authentication, also refresh tokens, providing the access token, and so on, and so on.

Jochen : [18:50] However, you might like to deploy it on your own server, or on a virtual server. Maybe you’d like to deploy it on another cloud. And then you can still take advantage of not being dependent on the Google Cloud client libraries, so you can keep it lean and slim, because you might have other ways of handling the authentication. And that’s the idea behind having it separated, so that you can decide which flavour of my SDK you would like to go for.
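As a rough illustration of what “using the gcloud CLI to retrieve an access token” can look like, here is a small sketch in plain .NET. This is not the SDK's actual implementation; the gcloud command shown is simply a standard way to print an access token for the current application-default credentials:

    using System;
    using System.Diagnostics;

    // Ask the locally installed gcloud CLI for an access token, roughly the
    // kind of thing the lean base package does under the hood, as described
    // above. On Windows the executable may be gcloud.cmd rather than gcloud.
    static string GetAccessTokenFromGcloud()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "gcloud",
            Arguments = "auth application-default print-access-token",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using var process = Process.Start(startInfo)!;
        var token = process.StandardOutput.ReadToEnd().Trim();
        process.WaitForExit();
        return token;
    }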

Jamie : [19:33] Right. Sort of like a modular approach of, “I want to have this particular authentication workflow with this particular API set,” that kind of thing.

Jochen : [19:46] Yes, yes.

Jamie : [19:50] Yeah, okay. So was there a specific–I know that we talked earlier about having the Python libraries available for you to sort of look at and figure out how they work. Were they an inspiration, or like a direct inspiration, for how you came to this decision for the .NET library? Or was this just a, “hey, wouldn’t it be cool if we could do it this way,” sort of thing?

Jochen : [20:20] It was a very interesting experience, I have to say. I got some ideas from the Python SDK, then I verified it in the JavaScript SDK, which was then a little bit different. I went into the source code that is available on GitHub for the Python SDK and I was, like, a little bit head-scratching. Because some of the stuff is really done and implemented in an elegant way, but then other things I was like, “hmm… that’s not the way you would do it in .NET, because there are other techniques and technologies, or approaches to how you can solve things.”

Jochen : [21:08] To give an example, in Python you have the possibility of variant parameters, so that you can throw in a string, an array, an object, and the method just picks it up and it’s fine. Whereas in .NET you need to have overloads in order to handle the different types of parameters, except if you go for the generic object type, which you would like to avoid.

Jochen : [21:36] So that was actually an interesting, positive thing. Then on the other side, I looked at the way the Python SDK uploads files, large files, so that you would actually put them into a temporary store and then get a URL to reference it for your queries. And they were using a chunk-based upload. And I was like, “no way. This is so old-fashioned. In .NET, we have streams, and it feels more natural.” So I went forward and went for the stream implementation, and it just worked fine.
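For illustration, the stream-based style he is describing looks roughly like this in .NET, using HttpClient and StreamContent rather than manual chunking. The upload URI and content type here are placeholders, not the real Gemini file-upload endpoint:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    // Upload a file by streaming it straight from disk into the request body.
    static async Task UploadFileAsync(HttpClient http, string path, Uri uploadUri)
    {
        await using var file = File.OpenRead(path);
        using var content = new StreamContent(file);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        using var response = await http.PostAsync(uploadUri, content);
        response.EnsureSuccessStatusCode();
    }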

Jochen : [22:14] However, there were gaps in the documentation, so I literally had to look into the actual source code of the different SDKs in order to work out, “what are the parameters that I need to put in my URL? What are the HTTP headers sometimes? What is the payload that I need to send in my request, in the body of the request? What does the JSON structure look like?”

Jochen : [22:48] So it was really a nice adventure getting to know the ins and outs of the different SDKs provided by Google, and taking, let’s say, the better parts and implementing them the .NET way in my own SDK.

Jamie : [23:10] That’s super interesting to me, because–I won’t say who, but someone I know in the Python community, who can do both Python and .NET, has said to me several times that, “people who come to .NET from a Python background tend to write .NET in a very Python-y way.” And so it’s interesting to me that there are these large differences between the Pythonistic and the .NET-ish, I guess, ways of writing code; in that, specifically, the Python libraries have these very Pythonistic ways of doing things, of uploading in chunks rather than streams and things like that. Whereas with .NET, I guess–and this isn’t meant to be a negative towards Python at all, I don’t think that Python is a slow-moving language at all–but I feel like because .NET has a lot of these new features that are implemented in an enterprisey way–and this is not meant to say that there is no such thing as enterprise Python–because whilst .NET is very unopinionated, there are some opinions in there anyway about, “hey, why don’t we use the latest stuff?” So it’s interesting to me that you came across these things in the Python library.

Jochen : [24:35] So on my side it was actually, the interesting experience was that I kind of used it one way, because it was mainly about reading the Python code, the JavaScript code, and, let’s say like, the Go implementations, trying to understand what are the concepts, what are the ways about how it has been implemented by the different teams. And then, you know, switching mentally into the other direction of writing the code in C#.

Jochen : [25:13] So that’s why I would say that this, what you mentioned earlier, about Python people tending to write Pythonistic .NET code and, vice versa, .NET people tending to write Python with the .NET approach–I guess this did not happen for me, because it was really this kind of one-way situation of reading source code in a different language, but staying and writing just in C#. And so it was actually more of a learning experience about Python, but I don’t really have experience of writing Python code myself.

Jochen : [25:59] Later on, I tried a few examples, just to see that I get into the details of how the Python SDK works, but then I tried to do it the Python way and not the .NET way. But it’s actually interesting, as you said, that you get this tendency to continue writing code the way you’re used to doing it in a different environment. And I have to say, luckily, this did not happen to me, as I was reading Python but writing in C#.

Jamie : [26:37] Right. Yeah, it’s an interesting one. Because, so, a bunch of my dev friends that I talk to on a regular basis, we all agree that the difference–from my perspective and from all of my dev friends that I talk to about this, from our collective perspective–it’s like there are only so many ways to write an if statement, right? And it’s not the different syntax around writing an if statement, or writing something into an object store or anything like that. That’s not what you’re learning when you’re learning a new language. What you’re learning is the specific ways to interact with the specific APIs that exist, and the ways that the language has been adopted by the community, right?

Jamie : [27:27] So you talked earlier on about how, with .NET APIs, usually, if you want to have an API that has two parameters and then a separate one that has a third parameter, you need an overload for that third parameter. And if that third parameter needs to change type from, say, a string to an int, that needs to have perhaps another overload with the three parameters where one of them is an int.

Jamie : [27:51] Whereas with the more dynamic languages like Python and things like that, you can have an almost–I think there is a limit, so please bear with me, and I do apologise to the people who are listening who are Python experts–it feels like an almost infinite, arbitrary set of parameters and arguments you want to pass into a method. And then on top of that you can have keyword arguments as well.

Jochen : [28:18] Yes.

Jamie : [28:18] Much like in .NET how you can do, say, string foo = and then an actual value, you can kind of do the same thing in Python. But when you’re passing things in you just don’t provide the type, you just say foo = and then the value. So if you have 25 parameters, most are optional, but you want to pass one in, you give it the name and then you pass it; kind of like an optional parameter in .NET. And it’s those differences that make or break a cross-language developer. It’s not the, “oh yes, I can write an if statement in Python, and I can write an if statement in C#.” Because they’re effectively the same.

Jochen : [28:55] Yeah. That’s absolutely right. Based on the fact that .NET is strongly typed, I had to actually provide multiple overloads of the same method, or even a constructor, in order to achieve the same outcome that the Python guys are just getting with one implementation. And I was like, “oh, they have it so nice. Less code, less code.” But yeah, I guess there are always pros and cons.
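To illustrate the difference being discussed, here is a small, hypothetical C# example (not from the SDK) showing both techniques: explicit overloads for different parameter types, and optional parameters with defaults that callers can set by name, which is the closest .NET gets to Python's keyword arguments:

    using System.Collections.Generic;

    public class PromptBuilder
    {
        // Different input types usually mean explicit overloads in C#...
        public string Build(string prompt) => prompt;
        public string Build(IEnumerable<string> parts) => string.Join("\n", parts);

        // ...while optional parameters with defaults cover the "most arguments
        // are optional" case, and can be supplied by name at the call site.
        public string Build(string prompt, double temperature, int maxOutputTokens = 2048)
            => $"{prompt} (temperature={temperature}, maxOutputTokens={maxOutputTokens})";
    }

    // Usage: new PromptBuilder().Build("Hello", temperature: 0.2);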

Jamie : [29:29] Of course. Of course, you know. What works for one doesn’t work for another, and that’s all right.

Jamie : [29:34] So we were talking, off air, before we recorded, when we were planning this out, about the different things that the library provides with regards to Generative AI. And what I really like is that, I feel like, all of the Generative AI providers are doing this: they’re productizing it. Like, you know, if you look at ChatGPT–I know we’re talking about Google Gemini, and I’m not quite sure how Google Gemini works, so bear with me, this might be the same way–but if you look at ChatGPT, what you do is you poke at an endpoint, you send a POST request to an endpoint: “here is my prompt. Here are, like, my temperature weightings,” and all that kind of stuff, which you don’t need to provide. If you don’t know what they are, you can say, “here is my prompt, and I will wait for a response,” and you either get a full response back or you can say, “I’ll stream it back.”

Jamie : [30:24] And I think that that is the key thing that differentiates Generative AI APIs from, sort of, standard APIs: in that, in a standard set of APIs, you are locked to what is provided. Whereas with a Generative AI API–I had to stop there because I was having trouble figuring out which way around the A’s and the I’s went–so in a Generative AI API it is like one endpoint. “Here’s a POST request with, this is my prompt, and give me my completion, my response.” So what I really like about the library that you’ve put together is that it takes all of that and provides that productized experience, to make it super easy for developers to connect with Google Gemini with .NET.

Jochen : [31:11] Yeah. That’s absolutely right. And, I mean, besides the native Gemini API and its SDKs… there’s also now a new, let’s say, direction happening from the Google team, in that they are actually adopting the OpenAI API: they’re providing compatible endpoints on their API. So that, if you’re used to using OpenAI libraries, you can also then switch and integrate with Gemini, based on the API key, and use it in the OpenAI fashion or style.

Jochen : [32:06] And so quite some interesting things are happening at the moment. However, it’s still experimental. There are bits and pieces missing. So at the moment, they are only offering, like, text generation and so-called embeddings to produce your vectors, whereas the Gemini SDK gives you the full range of all functionalities that the API is offering.

Jamie : [32:34] Right, right. So then what does that look like? I am conscious we’re an audio podcast, so I apologise. This is a super difficult question for an audio format. What does that look like from a .NET dev’s perspective? If I want to send some text generation prompts, like I want to ask Google Gemini a question in text with the SDK. Is it just literally a case of GoogleGemini.Completion or Request or Chat or something like that?

Jochen : [33:05] Yeah, absolutely. It’s also that in my SDK, I provide a test project with a lot of tests. And when you look at the simplest one: you get your instance of the Generative AI type, to which you pass either your API key, if you want to use it against Google AI, or your project ID and location if you want to use it against Vertex AI. Then you specify which model you’d like to use, and you specify the prompt, and the method that you call is then GenerateContent, and you get the response back. So effectively, with four lines of code, you have a full integration of Gemini into your .NET application.
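For readers following along, those four lines look roughly like this. The type and member names follow JoKi's description and the package name mentioned in this episode; they may differ slightly from the current release, so check the project's readme for the exact usage:

    using System;
    using Mscc.GenerativeAI;

    // Roughly the four lines described above: create the client (API key for
    // Google AI, or project ID and location for Vertex AI), pick a model,
    // send a prompt, read the response.
    var genAi = new GenerativeAI(apiKey: "YOUR_API_KEY");
    var model = genAi.GenerativeModel(model: "gemini-1.5-pro");
    var response = await model.GenerateContent("Write a haiku about .NET.");
    Console.WriteLine(response.Text);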

Jamie : [33:59] Wow. That’s genuinely awesome. You know, my own personal background, to fill in why that is awesome: I was doing a whole bunch of stuff with OpenAI, and Azure OpenAI, and a bunch of different AI endpoints back at the… not the very beginning, but close to the beginning of when the API endpoints started becoming viable; and it was, shall we say, less cheap to do all of the requests. And almost all of them, when they started, were, “yeah, literally do a POST request to this endpoint with the JSON looking like this. Also, we’re not going to tell you what these values mean, just send it. And then we’ll eventually come back with a response.”

Jamie : [34:47] So being able to actually say, “from a code perspective, I know what this object is. I’m setting some properties on it, I’m calling some methods on it,” and then I say, “hey, go do the thing.” And then perhaps I await the response, and it just comes back… that’s pretty, pretty cool. Really cool, I’ve gotta say.

Jamie : [35:09] Amazing. And so one of the things that kind of blows my mind, especially with a lot of things with backward compatibility, is that your SDK supports both modern .NET–so .NET 8 and, I guess, 7, 8, 9, not just 8 and 9–and .NET Framework. Which is an interesting stance to take.

Jochen : [35:35] Yeah, yeah.

Jamie : [35:36] Because… we’re running a little bit out of time, but I would love to know some of the challenges with supporting both Framework and .NET. Like, is it just a case of, “hey, this is a .NET Standard library, so good luck to you”? Or are you actually building things in that do, like, you know, checks: “oh, if we’re in .NET Framework land, do it this way. If we’re in modern .NET land, do it this way”?

Jochen : [36:05] Yeah, absolutely. I’m using the, how is it called? Pre-processor directives. So I’m really using the monikers for targeting net472 or netstandard2.0 for certain functionalities. Especially about using the different namespaces with the base class libraries from the .NET Framework, compared to .NET, or .NET Core as it used to be called.

Jochen : [36:40] One of the best examples, I would say, is the way the HttpClient is actually initiated. Because there’s a situation that, for .NET Framework, I cannot use features like HTTP/3 or the QUIC protocol. There are other things where I need to explicitly differentiate between .NET Framework or .NET Standard and .NET 6, 8, or 9. And yeah, as I said, I use pre-compile directives in order to have my conditional code execution, things like that.

Jochen : [37:31] The other interesting part that I had to look into is the deserialization of the response. It’s actually that .NET 6, 8, and 9 have better ways to do that. So I had to also then put in the condition about how to do things in certain ways, because it’s also that, in .NET Framework, you don’t get the async functionality in the JSON serialiser type. So I had to work around that; whereas in .NET 6 I can just say, “okay, take the generic type, read from JSON async, and give me the response back.”

Jochen : [38:27] Another example, which gave me a couple of issues, was the streaming responses, because there I can easily use the interface IAsyncEnumerable. This is native in .NET 6 and higher, whereas for .NET Framework, I had to use an additional NuGet package in order to get this functionality. And even then, in my project file, there are really, like, conditional blocks about what to add for which target framework and what to ignore. Certain settings are specific to .NET 6 and up, whereas others are specific to .NET Framework.
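As a rough sketch of the kind of conditional compilation being described, here is an illustrative example (not the SDK's actual code) that deserialises a response asynchronously on modern .NET and falls back to a plain read on .NET Framework and .NET Standard:

    using System.IO;
    using System.Text.Json;
    using System.Threading.Tasks;

    public static class ResponseReader
    {
        public static async Task<T> ReadAsync<T>(Stream stream)
        {
    #if NET6_0_OR_GREATER
            // Modern .NET: System.Text.Json can deserialise straight from the stream asynchronously.
            return await JsonSerializer.DeserializeAsync<T>(stream);
    #else
            // .NET Framework / .NET Standard: read the body first, then deserialise.
            using (var reader = new StreamReader(stream))
            {
                var json = await reader.ReadToEndAsync();
                return JsonSerializer.Deserialize<T>(json);
            }
    #endif
        }
    }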


A Request To You All

If you're enjoying this show, would you mind sharing it with a colleague? Check your podcatcher for a link to show notes, which has an embedded player within it and a transcription and all that stuff, and share that link with them. I'd really appreciate it if you could indeed share the show.

But if you'd like other ways to support it, you could:

  • Leave a rating or review on your podcatcher of choice
  • Consider buying the show a coffee
    • The BuyMeACoffee link is available on each episode's show notes page
    • This is a one-off financial support option
  • Become a patron
    • This is a monthly subscription-based financial support option
    • And a link to that is included on each episode's show notes page as well

I would love it if you would share the show with a friend or colleague or leave a rating or review. The other options are completely up to you, and are not required at all to continue enjoying the show.

Anyway, let's get back to it.


Jamie : [39:28] Right. And, I guess, wrangling that is pretty hairy, right? Because I can imagine that, unless you’re like… okay, just real quick then: is there a… have you got, like, compile-time checks to make sure that folks aren’t setting it up as if they’re using .NET 6 but are using .NET Framework? Or is that something where you’re able to just say, “look, if you set it up wrong, that’s kind of on you, dude”?

Jochen : [39:55] No. I mean, you don’t have to be supporting multiple target frameworks. You can just say, “hey, this is my application running in .NET 8,” and you then get the .NET 8-targeting NuGet package, or assembly, from the library, and it is as you would expect it. So it’s really, I’m using these pre-compile directives, and anything that would, let’s say, be specific to .NET Framework is just not even in the assembly that you’re using.

Jamie : [40:29] Right, okay, I get it now. I think I confused myself; I thought you were shipping one assembly and so being able to detect that at runtime.

Jochen : [40:39] No, no. I use the, what is it, TargetFrameworks–so plural–directive that you use in your project file. And then it’s just semicolon-separated, the different monikers for the frameworks that you want to target. So when you look into the NuGet package manager, you will actually see that the NuGet package supports .NET Framework 4.7.2, .NET Standard 2.0, .NET 6, .NET 8, and now also .NET 9.

Jamie : [41:21] Nice, nice. Yeah, I confused myself there. I do apologise.

Jamie : [41:27] Okay, so you’re shipping a set of different assemblies, using the target framework monikers, which, if folks don’t know about them: you should totally look into your C# project file and you’ll see an element called TargetFramework. Throw an s onto that so that it’s TargetFrameworks and you can, like JoKi said, semi-colon delimit a whole bunch of them. As long as you have the right SDKs installed on your computer, when you do a release build it will produce all of the different versions of the binaries, which is pretty dang cool.
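For anyone who wants to see it, that plural element looks something like this in a project file (the monikers listed here match the ones JoKi mentions; adjust them to whatever you actually target):

    <PropertyGroup>
      <!-- Semicolon-delimited target framework monikers; one set of binaries per moniker. -->
      <TargetFrameworks>net472;netstandard2.0;net6.0;net8.0;net9.0</TargetFrameworks>
    </PropertyGroup>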

Jochen : [42:05] It is, it is.

Jamie : [42:06] Excellent. Okay, so we are running a little low on time. So I was wondering, what are some of the ways that folks can maybe learn a little bit more about the Google Gemini SDK that you’ve written for .NET? And maybe different ways that folks can sort of catch up with you, like, you know, if they’ve got a burning question about, “hey JoKi, I looked into this, I’ve done this thing, I’ve gotten stuck.” Are you available for people just to ask you? I don’t mean like, “let’s spend hours supporting you,” but like, “I have this burning question, it will take 30 seconds to answer,” you know. Is it X–that was formerly Twitter–is it LinkedIn? How do folks reach out?

Jochen : [42:51] Well, the first starting point is: get into your IDE, whether it is Visual Studio, Visual Studio Code, or JetBrains Rider. Get the NuGet package, which is Mscc.GenerativeAI. Install the package. It’s fully documented based on XML docs, so you get IntelliSense; it’s fully there. Other than that, it’s open source, hosted on GitHub. And I hope, I think, Jamie, you’re going to provide these links in the podcast notes.

Jochen : [43:27] It’s on GitHub, so mscraftsman/generative-ai, and on the NuGet gallery you get the link for the project source. In the readme, I try to put in the available functionalities and features, as well as various examples that get you up and running: how you can use it against Google AI using an API key, but also how you can do the authentication against Vertex AI. The samples are then identical, because you just need to do, like, the first step, where you decide whether it is API key based or whether it is project based; all the rest is then identical.

Jochen : [44:19] Otherwise, if you want to reach out to me directly, you can catch me on all the common social media networks, whether it is X, Mastodon, or Bluesky. Facebook, not really, because that’s more private stuff. But other than that, I also have a Google Developer Profile. The handle usually is JKirstaetter, so just my last name with the J in front. And yeah, feel free to ping me if you have any questions, or if you would like to have some assistance because you’ve gotten stuck with some of the approaches that you’d like to take.

Jochen : [45:07] And of course, on GitHub, you can create an issue there if you have something, if there are suggestions. And I will take it from there. And actually, I have to say, with GitHub, I’m really, really happy as well, because just recently Microsoft announced their new package, Microsoft.Extensions.AI–and I think Stephen Toub was doing a session about this during the .NET conference for .NET 9–and I also implemented the Microsoft-specific interfaces, so that you can use Gemini directly based on how Microsoft.Extensions.AI is working. So, same package, for different kinds of approaches: the direct Google Gemini API, you can use it together with the OpenAI API, and you can also use it the Microsoft way, using the Microsoft.Extensions.AI interfaces.

Jochen : [46:23] So I hope that it is one of the fastest and simplest ways to integrate and use Gemini in your .NET applications.

Jamie : [46:35] Nice. I like that, I like that. Because the more information we have about how to do stuff, the easier it will be. And, like you said, about being able to implement the interfaces that they’ve brought in from the Microsoft.Extensions.AI project: that means that it’s even easier to get started, right? I love it, love it.

Jochen : [46:59] Thank you. Yeah KISS, keep it simple and stupid.

Jamie : [47:05] That’s it. That’s exactly it. Awesome.

Jamie : [47:11] Well, JoKi, I have really enjoyed having this conversation with you today. I am very much a–I know that there are some SDKs by Google for .NET devs, but I’ve hardly done any .NET on Google, or leveraging Google Cloud stuff with .NET. So this has really sort of jump-started my brain and got me thinking, “what can I do that can leverage some Google AI?” And maybe I can talk to that and talk to ChatGPT, and have them sort of both help me in my app that I’m building. So I really appreciate it. Thank you very much for being on the show.

Jochen : [47:50] Jamie, thank you so much for having me. It was a pleasure talking to you. And yeah, I hope there’s some valuable information for your listeners on the podcast.

Jamie : [48:00] Absolutely, absolutely. Thank you very much.

Wrapping Up

Thank you for listening to this episode of The Modern .NET Show with me, Jamie Taylor. I’d like to thank this episode’s guest for graciously sharing their time, expertise, and knowledge.

Be sure to check out the show notes for a bunch of links to some of the stuff that we covered, and full transcription of the interview. The show notes, as always, can be found at the podcast's website, and there will be a link directly to them in your podcatcher.

And don’t forget to spread the word, leave a rating or review on your podcatcher of choice—head over to dotnetcore.show/review for ways to do that—reach out via our contact page, or join our discord server at dotnetcore.show/discord—all of which are linked in the show notes.

But above all, I hope you have a fantastic rest of your day, and I hope that I’ll see you again, next time for more .NET goodness.

I will see you again real soon. See you later folks.

Follow the show

You can find the show on any of these places