The Modern .NET Show

S08E16 - IoT and .NET nanoFramework: Andy Clark on Building Beyond the Limits

Sponsors

Support for this episode of The Modern .NET Show comes from the following sponsors. Please take a moment to learn more about their products and services:

Please also see the full sponsor message(s) in the episode transcription for more details of their products and services, and offers exclusive to listeners of The Modern .NET Show.

Thank you to the sponsors for supporting the show.



Episode Summary

This episode centres around Andy Clark’s journey into the world of embedded systems and the .NET nanoFramework. Andy shared his programming origins, starting with the ZX Spectrum and progressing through work on cruise and container ships to his current focus on warehouse logistics – all utilising message-based systems. This experience led him to experiment with electromechanical devices, ultimately leading him to explore .NET nanoFramework as a means of building projects with a higher level of abstraction than traditional microcontroller development offered. He enjoys building projects as prizes for his team, fostering a culture of experimentation and fun within the workplace.

.NET nanoFramework provides a smaller version of the .NET framework designed to run on microcontrollers, abstracting away many of the complexities of low-level programming. Previously, developers working with microcontrollers often had to contend with assembly language and direct hardware manipulation. Andy highlighted how nanoFramework simplifies this process, allowing developers to utilise familiar C# syntax and NuGet packages, even on resource-constrained devices. This dramatically reduces the learning curve and enables rapid prototyping and development, bypassing the need for intricate, device-specific toolchains and SDKs.

A key theme was the power of constraints in fostering creativity. Both Jamie and Andy discussed how limitations in memory and processing power can force developers to think differently and prioritise efficient solutions. Drawing parallels to early computing and even cooking, they emphasised that working within restrictions can often yield more innovative and elegant designs. Andy explained that the need to optimise for low resources pushed him to look into techniques such as code generation and carefully managing memory usage, skills applicable to larger-scale software development.

The conversation also touched upon the importance of community support in the .NET nanoFramework ecosystem. Andy highlighted the collaborative nature of the project, with many contributors building drivers and libraries for a growing range of hardware platforms. While specific device support requires some initial setup, pre-built images and the availability of community contributions can significantly ease the process. This collaborative approach also allows developers to focus on their applications without having to spend time on low-level hardware intricacies, speeding up development cycles.

Episode Transcription

But I was looking for something that I could give to some of my team members as a prize for a hackathon that they completed, and I didn’t want to force them down that route of having to solder their own stuff. So I found a little board with a display on it and various other capabilities, and then realised that I could put the nanoFramework on it.

- Andy Clark

Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I’m your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.

Today, we’re joined by Andy Clark to talk about .NET nanoFramework, how he came to find out about it (pro tip: there’s a wonderful circular moment in the episode, see if you can spot it), and why he chose to look into embedded systems in the first place.

And I think the same kind of applies to software, which is: if you’re doing the same things over and over again, you almost kind of blinker yourself into working in particular ways.

- Andy Clark

Along the way, we talked about the importance both of constraints on software design, and of looking around at what other systems and frameworks do and use. We also took a walk down memory lane for me, as what we were talking about reminded me of my college days.

So let’s sit back, open up a terminal, type in dotnet new podcast and we’ll dive into the core of Modern .NET.

Jamie : Andy, welcome to the show. It has been a while since we last chatted. I don’t think you’ll remember, but we did chat in person once - you came up to Leeds and gave a talk on .NET things - and I specifically remember you mentioned, “Oh, I work for this particular employer,” and I thought, right, I’ve got to remember that, because that’s important. I don’t necessarily remember why it was important, but it was at the time. It’s good to be chatting with you again.

Andy : Thanks, Jamie.

Jamie : Before we get started with the topic at hand, which is a very interesting one, I have to admit - would you be willing to give the folks a brief intro to yourself? You don’t have to go as deep as, well, I was born in a log cabin, but things like I’ve been programming since this time and my main interests are this, that, and the other.

Andy : I have been programming since I was very young. I started out on things like the ZX Spectrum, then spent twenty-plus years with a company that worked on cruise ships and container ships, which is all very interesting. I’m now into a completely different type of shipping - we ship products from warehouses, and our team of developers uses message-based systems to get all the relevant data across our systems.

In my spare time I build electromechanical devices, and there’s a crossover between the work and the hobby, which is what we’re having a chat about today.

Jamie : That was wonderful, Andy. I remember a friend of mine who is an electrical engineer - he’ll probably yell at me and tell me he’s an electronic engineer, because I always get the two mixed up. He worked for a while in shipping, building the electronics and electrical systems for luxury yachts, and he said it is a completely different world - no matter what you think your workday is like, it is absolutely 100% completely different.

I can imagine that even just the first part of your history there would be different. The logistics and shipping industry - as you said, shipping things around the world - that has its own challenges, right?

Andy : Totally. Working with a lot of third parties involves a surprising amount of information just to get one parcel to the right place.

Jamie : I can imagine. It’s probably not in this vein, but when I order something from an online retailer, it has to go from a warehouse into consignment, into a bulk order to be shipped to a local-ish area, then maybe from there to a sorting office, then from there to a person who will deliver it, and finally to me. I don’t think about any of that when I hit “buy now,” do I?

Andy : No, there are a lot of steps in the process, and keeping track of all of that is part of the challenge.

Jamie : It’s a whole interesting side of the world that most people don’t think about. I’d love to continue talking about that, but that’s not our topic for today.

If folks haven’t caught on already - it’s in the episode title - we’re going to be talking about .NET nanoFramework, experimentation with embedded systems, and similar things. We did have a couple of people on talking about nanoFramework and IoT in the past, but for folks who haven’t heard those episodes, could you give a 10,000-foot overview? If you were saying to someone, “Hey, this is my project and this is how it works,” what are you saying to them?

Andy : So .NET nanoFramework, as the name suggests, is a smaller version of .NET Framework. It’s designed to run on microcontrollers. Microcontrollers are basically the brains behind all of the small devices you might find around the house - from high-end washing machines and dishwashers through to all the classic voice-activated assistants, which all have various types of microcontrollers in them.

The big difference between a microcontroller and a microprocessor is that typically the package has a bunch of other functionality contained within it, such as input/output and memory management. In the case of the one I was looking at, it also has a Wi-Fi capability, so it could talk to the internet.

Jamie : So we’ve got this system that has a microcontroller in it, which is different to a microprocessor because I don’t have to worry about a whole operating system and memory management. My normal cloud-based operations, my normal forms-over-data work, my normal day-job work - even perhaps apps on my phone - I don’t typically have to worry about those things, because the operating system takes care of it for me.

But if I’m dealing with a microcontroller, am I doing literally everything myself? Freeing memory, worrying about memory management as you said - am I also worried about overloading the CPU? I don’t even know if this is the right term anymore, because it’s been so long since I did my computer science degree - thrashing or flip-flopping. Is that the stuff I need to worry about?

Andy : Normally, yes. The nice thing about the nanoFramework is that it takes a lot of that responsibility away. When I first started out with microcontrollers, I was using assembly language statements to make my software work, so you’d end up with a page of code and all you’d get at the end of it was “Hello World” coming out of a serial port.

Over the years, things have got a bit more sophisticated. We’d typically see C used on these microcontrollers, and quite often MicroPython as well. By putting the nanoFramework on it, things like memory management and abstractions around a lot of the hardware are already in place for you, so you don’t have to worry about interrupt handlers nearly as much as you would if you were bare-metaling it, as they call it when you write directly to the raw processor.

Jamie : You’ve just unlocked a memory for me. Back at college - for our international audience, that’s sixteen to eighteen year olds here in the UK - I did electronics and telecommunications, and we had a whole year devoted to using assembler to program against 68000 boards, which had a 68k Motorola CPU on a little dev board, and you had to write everything in assembly.

Everything you were saying there about pages and pages of code just to do Hello World, or just to turn an LED on - you didn’t say LED, but that was one of the things I had to do. The only thing I remember about each of those program listings was that the very last thing you had to do was TRAP #11, and I do not know why. That was all in assembler.

Then we moved on to using PICs with low-level C. If any of your credit cards have one of those chips for chip and pin, it’s essentially what a PIC is. A SIM card is something along those lines too. You’ve just unlocked all of these memories.

Andy : I also worked on PIC devices. One of the key differences between those early devices and some of the more modern ones is that we’re not only getting abstraction at the software level, we’re also getting it at the hardware level.

Rather than what they call bit toggling - where you’re basically saying “go high, go low” to communicate something - you can now say at the hardware level, “Hey, serial port, do some stuff for me,” and the serial port takes on those responsibilities, rather than having to literally go high and low on various devices.

Jamie : You’re continuing to unlock more and more memories. I remember with the 68k, we had to program over a parallel port in DOS. We had to drop out of Windows - this was in the early 2000s, using Windows XP - and not by opening a DOS window, but actually dropping out entirely. We were doing the bit-flipping: you’d send a binary value to a specific register that would then cause a bit to flip so that something could be read or written.

Then, like you said, the hardware abstractions came along - you’re just saying to the port, to the device, “Hey, give me the value of this,” or “Set the value of this.” It’s completely alien from what we do as .NET developers every day. We don’t have to worry about setting up memory mapping for our apps, and if we’re using full .NET - if that’s the right term - we don’t have to create our own HTTP clients or TCP clients to do stuff over the wire. But in that IoT world, pre-.NET nanoFramework, you did have to, right?

Andy : Exactly. They call it a TCP stack, but even those were fairly low-level, so you’d need to know the start of the message, the end of the message, and send all the data in between. No HTTP GET, no JSON, no stream capabilities on those earlier devices.
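Andy’s description of those low-level stacks - knowing the start of the message, the end of the message, and sending everything in between - can be sketched as a toy example. The STX/ETX markers, length byte, and XOR checksum here are common conventions chosen for illustration, not any particular device’s protocol:

```csharp
using System;
using System.Collections.Generic;

// Toy binary framing: a start byte, a one-byte length, the payload,
// an XOR checksum, and an end byte. The 0x02/0x03 markers are the
// ASCII STX/ETX convention, used here purely as an illustration.
static class Framing
{
    const byte Stx = 0x02;
    const byte Etx = 0x03;

    public static byte[] Frame(byte[] payload)
    {
        var frame = new List<byte> { Stx, (byte)payload.Length };
        frame.AddRange(payload);
        byte checksum = 0;
        foreach (byte b in payload) checksum ^= b; // simple XOR checksum
        frame.Add(checksum);
        frame.Add(Etx);
        return frame.ToArray();
    }

    // Returns the payload, or null if the frame is malformed.
    public static byte[] Unframe(byte[] frame)
    {
        if (frame.Length < 4 || frame[0] != Stx || frame[frame.Length - 1] != Etx)
            return null;
        int length = frame[1];
        if (frame.Length != length + 4) return null;
        byte checksum = 0;
        var payload = new byte[length];
        for (int i = 0; i < length; i++)
        {
            payload[i] = frame[2 + i];
            checksum ^= payload[i];
        }
        return checksum == frame[frame.Length - 2] ? payload : null;
    }
}
```

A real protocol would also pin down byte order, escaping, and error recovery, but the shape - delimiters around a length-prefixed payload - is much the same.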

Jamie : I guess just sending and receiving JSON would be too much anyway. Even if you could write your own code to send it, receiving a text document and then parsing it - on those early devices, that is way out of your league.

Andy : Binary protocols were pretty much the standard people would use, so literally each byte had a meaning, and quite often each bit within those bytes would have a meaning too. You could be effectively passing several different pieces of information with each byte that went across the wire.

To a large extent, that’s still the case. Although the processor end is a lot more sophisticated, the target ends are still effectively quite low-spec devices. If you’re talking to a temperature sensor, for instance, you’d be setting a register and then checking a different register to see the current value for that temperature, and quite often you have to worry about what order the bytes are in and how that temperature is represented. There is still quite a lot of low-level work to worry about when dealing with microcontrollers.
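That register dance - read a couple of bytes, mind the byte order, and peel apart the fields - looks something like the sketch below. The layout (a big-endian 16-bit register with a signed 12-bit temperature in sixteenths of a degree and four status-flag bits) is invented for illustration, though many real sensors pack a reading and status bits together in a very similar way:

```csharp
using System;

// Hypothetical two-byte temperature register, big-endian: the top 12 bits
// are a signed temperature in sixteenths of a degree Celsius, and the low
// 4 bits are status flags. The layout is made up for this example, but it
// mirrors how real sensors pack several pieces of information per read.
static class TempRegister
{
    public static (double Celsius, int Flags) Decode(byte high, byte low)
    {
        int raw = (high << 8) | low;                  // big-endian: high byte first
        int flags = raw & 0x0F;                       // low 4 bits are flags
        int temp12 = raw >> 4;                        // top 12 bits are the reading
        if ((temp12 & 0x800) != 0) temp12 -= 0x1000;  // sign-extend 12 bits
        return (temp12 / 16.0, flags);
    }
}
```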

Jamie : Even in the modern stuff, as you said. Whilst you’re using slightly more modern processors - maybe an ARM or something like that, we can get into that in a moment - the other components around it don’t need to change. There’s no need to change the temperature sensor, or a moisture sensor, or a video sensor, because they’ve worked forever. So let’s just make the processor speak their language, right?

Andy : Exactly. The advantage of using a high-level language like C# is that if you want to put those abstractions in place, you can do so. Your code reads like “start timer, get temperature, stop timer” rather than low-level statements. You can still have nice, easy-to-read code on your device, and that was actually the reason I got into .NET nanoFramework.

Jamie : So one of the reasons you got into it was because you could abstract all of that away. That makes sense, because then you’re not writing hundreds of lines of boilerplate code - bear with me whilst I make my IoT device measure the temperature of the room, just writing code for fifteen minutes - when actually you could have got up, walked over to your thermometer, and just read the temperature.

Andy : You find the right level of abstraction. Obviously you wouldn’t want to make it too many levels deep, because at the end of the day these devices are still limited in various ways.

Just to recap on how I got into the whole thing: I like to mess about with electronics, solder bits and pieces, and build projects from the ground up. I was looking for something to give to some of my team members as a prize for a hackathon they’d completed, and I didn’t want to force them down the route of having to solder their own stuff.

I found a little board with a display on it and various other capabilities, and realised I could put the nanoFramework on it. I ended up with this device with a touch display - I think it’s also got audio and Wi-Fi - that I could present to them and say, “Here you go, go write C# code and have some fun with this.”

Jamie : There is an episode coming out slightly before this one where I talked to someone who built something for fun. I feel like, especially in the enterprise, there’s a little less of that - fewer people actually building things for fun.

First off, regardless of whether those folks in that team appreciated it, I appreciate it, right? Because we need to bring some fun back into what we do. It is so easy to fall into the trap of “I will make a forms-over-data app, I will build an API, it will be enterprisey, I will add unit tests,” and so on. Life gets very businesslike - I’m reticent to say the word boring, but you know what I mean. There’s not much in the way of immediate blasts of joy.

Being able to turn to someone and say, “Here’s this really cool thing - go build something fun with it,” and just watch them go off and do it - I think that’s a wonderful thing in itself.

Andy : That was the idea: to provide a platform for experimentation with these devices, rather than having people go away and learn a whole new paradigm of different things and techniques, which is typically what happens in the microcontroller world. You pick up a device, you get a software development kit with it, and then you have to understand what that is and build the tool chains, which might be different from the ones you’re used to.

It’s a lot of extra stuff to learn. If you’re doing that full time, that’s fine, because you learn it once and then you’re using it daily. But in the business world, remembering which SDK to use and which tool to use with particular devices can be quite complicated.

Jamie : Definitely. At the other side of it, you’ve built something really cool - and you’ve likely built something very unique. You might put on one of those GPIO boards that sit on top of the device with a little screen on it. Like you said, the one you picked had a screen on it, but you might be building something that shows you the time, or the number of lines in this commit, or something like that. That’s always really cool.

I think it’s more fun to do those projects, and - this is a very hot take - more important than doing the standard code-cutters, because it brings the joy back into the work. Otherwise it becomes monotonous. If you can take a break and do something completely different, it helps reset your brain, and it also introduces you to new ways of thinking about the work you do on a daily basis.

Andy : There’s a tenuous connection with the kata, actually. I used to do karate, and my instructor recommended that when people got to the highest level - black belt and above - they should go off on a completely different tangent and try a different discipline, like judo or taekwondo, and then come back. The idea being that the thing they were focusing on would actually improve as a result.

I think the same applies to software. If you’re doing the same things over and over again, you blinker yourself into working in particular ways. By going off and experimenting - whether that’s a back-end developer trying front-end, or experimenting with microcontrollers - it gives you a different perspective, which you can then bring back and apply to your day-to-day work.

Jamie : I 100% agree. I do the same thing with my reading. I spend time reading about what other industries - not just other technologies within the technology space, but other industries entirely - are doing. Over the last few years that’s included looking into empathy, sympathy, and what are sometimes called the soft skills (though I prefer “interpersonal skills”), and then bringing that back to my practice.

Like you said, it lets you see things from a different perspective - why do we do things the way we do, and how do others do things differently? How might that fit better?

Andy : It can give you that context for what you’re doing. The other thing I’ve found about these little boards is they have tight constraints on them, and we’re not always used to dealing with that on a day-to-day basis. Just the other day I was dealing with an indexing application and went, “Oh, I’ll just assign the pod three gigabytes of memory.” The devices I’ve been working with have about 512 kilobytes of memory, so it really is tightly constrained compared to what we’re used to.

Jamie : I think that breeds innovation. One of the things people often find fascinating - and it is a proper tangent, so I apologise, but I will make it relevant - is the fact that Doom runs everywhere, right? The reason for that is a wonderful book by Fabien Sanglard called “Game Engine Black Book: DOOM.” It’s part of a series about the early id Software video game engines, and this one focuses specifically on the design choices John Carmack made whilst building the engine.

The reason it’s so eminently portable is because he abstracted away the constraints of those early PCs from the very early ’90s. The reason a million Doom clones exist and a million different Doom engines exist is because they’re all programming to the same constraints via those same abstractions. So the Doom clone developer doesn’t have to worry about the hardware constraints, but the person porting it to another device does.

I genuinely feel that working to those constraints can make you a better software developer. I was talking to someone just yesterday about music - I’m very much an amateur musician in that I play bass guitar. Someone asked me, “How many effects pedals do you have?” I said, “None.” They said, “Can’t you be more creative with effects pedals?” I said that what would happen is I’d spend a couple of hours fiddling around with them, listening to the cool sounds, which would be fun - but if my goal is to learn a piece of music, I won’t learn the piece of music. I’ll spend hours fiddling around with effects pedals and VST plugins or whatever.

I feel that applying constraints allows you to be more creative in how you think about things. Removing the effects pedals, removing the VST plugins, and just focusing on the constraint of “I have this thing and I need to make it sound like that” - or, in your case, “I have 512 kilobytes of memory and I need to make this thing do that.”

Andy : For me, one of the biggest challenges was the screen. It’s 320 by 240 I think, but once you allow for the three colour channels and enough resolution, that’s nearly all your memory gone. So you can’t hold the entire screen in memory, which is a fairly common technique for writing to screen devices - you take a bitmap copy of it, draw on it, and then send it back.

What I ended up having to do was write chunks of screen at a time, copying bits and pieces in. That raises another interesting constraint: on desktop apps, fonts are available to you without much thought, but on these devices a font is an optional extra. You have to take your font and compile it into something called the TinyFont standard. You can even go as far as saying, “I only need the letters from A to Z in capitals,” so you’d only have 26 glyphs. You pick your size to match your requirement, create your font file, and then you can use it for writing to the screen. Luckily, you don’t have to draw out the individual characters - that’s been abstracted away - but you basically say, “Here’s the font and here’s the text, put it on the screen in this position.”
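The arithmetic behind that constraint is worth making concrete. Assuming common display pixel formats (the bytes-per-pixel figures below are illustrative, not the specs of Andy’s particular board), a full 320 by 240 framebuffer eats a large slice of a 512 KB part before the programme has allocated anything else:

```csharp
using System;

// Rough framebuffer sizing: width x height x bytes-per-pixel. The formats
// used in the test (RGB565 at 2 bytes, RGB888 at 3 bytes per pixel) are
// common display formats, used here as illustrative assumptions.
static class Framebuffer
{
    public static long Bytes(int width, int height, int bytesPerPixel)
        => (long)width * height * bytesPerPixel;
}
// 320 x 240 x 2 = 153,600 bytes; 320 x 240 x 3 = 230,400 bytes.
// Against 512 KB (524,288 bytes) of RAM shared with the runtime and your
// variables, either leaves very little room - hence drawing in chunks.
```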

Jamie : That makes me think of all our friends who use Font Awesome or any sprite-sheet-based web design, where they just download the equivalent of a thousand times more data and then go, “Well, I only need this one bit from this icon file, or this sprite sheet, but I’ll download the whole thing and just snip out the bit I want and position it on screen.”

The inefficiency - that might not be the right word, but it’s the one I’m reaching for - of having to download megabytes of data to show a small thing on screen. Whereas you have to plan in advance: what messages do I want to display, which characters or glyphs do I need, how big must they be? It’s a completely different world.

Andy : You picked up a good example there - you’d probably have something like a sprite sheet on these devices so that you don’t have to keep repeating your graphic elements. But you’d need to be careful about how large that sheet was, because otherwise it could blow the resources on the machine.

One thing that is quite useful, though, is that you have your programme space and then you have your RAM space. If things are not changing, you effectively write them into the programme side - the flash memory is where the programme lives, and the static RAM is where your variables and temporary storage live. There are techniques you can use to push constant data into the flash side, which saves you a bunch of RAM.

Jamie : There’s a fantastic series of videos by a YouTuber - I think it’s “The 1000th Coin” - who, I’ve gathered, is a .NET dev. They talk about how the NES did those kinds of things: data that’s never going to change is stored as part of the programme rather than in volatile RAM space, because you need all of that volatile RAM space for game assets and the like. I’ve probably got some of the words wrong, but hopefully you get the idea.

Andy : Back when I first started programming, I was doing a little bit of game programming, and one technique you can easily apply on these modern microcontrollers is: if you need the same image rotated or mirrored, you can store it once and do the rotation and mirroring on the fly. The processor is a reasonably good spec - around a 200 megahertz processor - so flipping an 8x8 icon is not a big deal, whereas having multiple copies of the data might actually be an issue.
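A sketch of that store-once, transform-on-the-fly idea, for an 8x8 one-bit-per-pixel icon where each byte is one row: mirroring horizontally is just reversing the bits of every row, so only one copy of the artwork needs to live in flash. This shows the general technique, not code from any particular game:

```csharp
using System;

// Mirror an 8x8 1-bpp icon on the fly instead of storing a second copy.
// Each of the 8 bytes is one row of pixels; a horizontal mirror reverses
// the bit order within every row.
static class Sprite
{
    public static byte ReverseBits(byte b)
    {
        byte result = 0;
        for (int i = 0; i < 8; i++)
        {
            result = (byte)((result << 1) | (b & 1));
            b >>= 1;
        }
        return result;
    }

    public static byte[] MirrorHorizontal(byte[] rows)
    {
        var mirrored = new byte[rows.Length];
        for (int i = 0; i < rows.Length; i++)
            mirrored[i] = ReverseBits(rows[i]);
        return mirrored;
    }
}
```

Rotations work the same way - compute which source bit lands at each destination pixel rather than storing pre-rotated copies - trading a little CPU time for a lot of storage, exactly the trade-off Andy describes.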

Jamie : At that point you’re making trade-offs: is it going to be easier to write this as code, or to store a separate version? Maybe you do have a series of glyphs or sprites that need to be rotated, but during testing you realise that rotating them is too demanding on the CPU to complete quickly enough for it to be seamless for the user. At that point, you take the hit of storing a second pre-rotated version, right?

Andy : Potentially you could make those optimisations as you go along. What I was thinking about was animation. You mentioned Doom, but getting the frame rates right on these devices could be quite challenging. Although the processor might be quite quick, you also have to think about the bandwidth between the processor and the display. If the display isn’t a fast one, then no matter how quickly you can get data out of the processor, if you can’t get it into the display fast enough, you’re not going to be able to do sophisticated animations.

Jamie : There are all of these constraints to think about. Like you said - Kubernetes pods, they need to do some processing, so just throw three gigabytes of RAM at them. I’m looking at the laptop I’m using right now and it’s got 128 gigabytes of RAM. How is that even possible? I remember as a kid getting a computer with 120 megabytes of RAM and thinking, wow, I can store the world on this computer.

Andy : It’s similar - I started out with a 48K Spectrum, even smaller than the devices I’m working on at the moment. For anybody brought up in the eighties and nineties, there are a lot of parallels between those early computers and many of these microcontroller devices.

I think probably the biggest difference is networking - we just didn’t really have that back then, whereas even some very low-end microcontrollers now have built-in Wi-Fi stacks.

My next project is probably going to involve a protocol called Zigbee, which is used for transmitting things like temperature data. Hopefully I’ll be able to get .NET nanoFramework onto a different microcontroller and get that project working: an automated greenhouse where we send data from the greenhouse to an e-ink display. The idea with e-ink is that you write to the screen and then kill the power, and the image stays on the screen.

I actually ended up experimenting with e-ink displays at work for some of the stores, and I’ve since brought that back into my hobbies for things around the home.

Jamie : It’s really interesting you mentioned Zigbee - I’ll talk to you about that offline, because I don’t need to bore the listeners with it, but there are some interesting things going on between me and Zigbee.

E-ink is fantastic. For folks who haven’t used it, it’s literally as Andy described: you switch the screen on, write to it, switch it off, and it just stays there. I’ve seen them at bus stops, in stores - I’m actually writing on an e-ink screen as we record this. I’ve got several e-ink devices for reading books. They are genius, right? Whoever created them is a fantastic person. Because you only turn the screen on to update it, they require almost no power, and they can hold the image practically indefinitely.

Andy : I don’t know the exact duration, but it’s in the years rather than the hours. Likewise with battery life - I’m hoping my temperature probe will run for maybe a year, possibly two, on a single coin cell battery. Very low power usage. That’s also where Zigbee comes in, because it’s a very low-power protocol.

Jamie : Hopefully this won’t be too much of a swerve, but that’s where .NET nanoFramework comes in, I guess.

Andy : They’ve definitely already got Wi-Fi support written. I don’t know if they’ve done Zigbee yet - I might end up having to do the low-level stuff myself. One of the low-level drivers for the touch screen was quite a challenge, but eventually we got that back into the community, so it’s now available for other people to use if they’re working with the same touch screen technology.

Jamie : I guess, first off, some parts of the stack are already written as community contributions - it’s open source, right? So if I know something about the Zigbee stack, I could write a module, plug-in, or class library that could help potentially hundreds of thousands of people use Zigbee on their IoT or embedded devices with .NET nanoFramework, right?

Andy : It’s the same terminology as we’re used to - it’d be a class library, or possibly even a NuGet package, that people could use in their projects. That’s one of the nice things about the nanoFramework: there are a few extra steps, but a lot of it is very similar to what you’re used to. If you want to add extra functionality, you bring up your package manager and ask for the drawing library, or SPI - one of the protocols for talking to devices - and you just say, “I want to use that now,” add it in, and off you go.

There is one big gotcha, which is that it’s quite version-sensitive. Some of the code that ends up on the device is basically version-locked, so when you pick your packages you have to get the matching package for what’s already on the machine. They’ve drawn the line such that the low-level stuff is actually written in C or using the SDKs from the particular microcontroller vendors, and then some of it is written in C#. All that complexity is handled by the people doing the platform work, so adding something new - like a new protocol - can be done in native C# code, stuff that people are already used to.

I think that’s what I like about it from an IoT perspective. You could potentially have a full stack - from the sensors through the controllers and hubs out to the front end - all written in C#, which would be brilliant.

Jamie : So, that feeds into another thought I had. Because .NET nanoFramework - not your words, mine - is abstracting some of those details away and you’re locked to certain versions, I’m inferring further: because there are certain versions of things you can use, some boffin somewhere has already written the low-level stuff and implemented it, perhaps in C or using the SDK for that device.

Does that mean I have to use very specific devices when building software with .NET nanoFramework? Is there some kind of one-time setup that says, “This project is for an Arduino XYZ and it has this particular breadboard attached”? Or is it just a case of, “Whatever - the tooling team will figure it out and load the right stuff”? You may or may not know that, of course.

Andy : I did have to find this out myself, actually. When I started out, I thought, “Oh, it’s a new device, so I’ll need to build new firmware from scratch.” I followed all the instructions on how to build the firmware and ended up with my firmware image, which I was ready to put onto the machine. Just as I got to the step where it said to flash it onto the machine, I noticed a note saying, “Oh, and here are all the pre-built images you can use.” Oh no.

So I ended up using one of the community packages, and that sorted me out. It’s very similar to the Arduino world - there are a lot of people writing the low-level stuff and dealing with the actual processors and SDKs, and then there are people who are more consumers of that, writing the higher-level stuff that provides the apps and devices you see out in the world. It truly is a full community effort.

Jamie : That’s really quite cool.

Andy : I’m not sure there are any big vendors involved - it’s all people coordinating, making sure the documentation and everything is in place, and largely doing it for the fun of it. Though I imagine they’re also getting other benefits from it or using it themselves.

Just to loop back to what you said about specific devices: there are a number of community images you can use, but those are locked to a subset of microcontrollers. If a completely new microcontroller came out, there would be some work needed to get the low-level stuff in place. I’m using a microcontroller called the ESP32, and that comes in lots of different variants. Typically, if a new variant comes out, it’s probably just a case of setting a few flags and parameters to produce a new version. But if somebody brought out a totally new tech stack - say, a RISC-V processor or something like that - then somebody would have to do a lot of work to get .NET nanoFramework up and running on it.

Jamie : So there is a little bit of background work that someone needs to do, but for the majority of devices that the majority of .NET nanoFramework users are likely to be using - that’s a slightly confusing sentence - there is probably already a firmware image that handles the setup and the runtime.

Andy : There’s an ever-increasing list of devices being supported. I think a lot of people can get in there and get it up and running on at least one device they have easy access to. They may not have picked their microcontroller specifically for this purpose, but it’s definitely not limited to one platform - there are multiple microcontrollers supported.

Jamie : That makes sense - I can pick from a range of devices that, like you said, I have access to and can build my system on. You’re using the ESP32, which is a whole suite of devices. Was it simply a case of, “Oh, there’s one of these at my local store,” or was it, “I’ve done loads of research and for the type of project I want - the temperature sensor, the greenhouse, Zigbee - I’m most likely to succeed with an ESP32”?

What I’m trying to get at is: if someone is listening and thinking, “I want to try out .NET nanoFramework and I’ve got an idea in mind,” are they going to be limited by the devices available? Will they have to choose between, say, four different types of device? How do they choose the right one? Because it could be a considerable amount of money involved in buying these devices.

Andy : Money was actually the reason I ended up with this particular board. As I said, I was looking for prizes for the team - they gave me the budget and it wasn’t very much. So I started looking for interesting things I could give the team, spotted these little boards, dug into them in a bit more detail, and discovered that people had been writing their own software for them.

I thought, “I’ll get them for the team.” I actually think I heard about it on this very show - that .NET nanoFramework was a thing - and wondered, “Oh, will nanoFramework work on this board I bought?” So I discovered it that way.

Looping back to your question about selection: because of the nature of .NET nanoFramework, it does require a fairly high-spec processor compared to a lot of what’s out there. If you’re just blinking an LED, there are probably processors available at sub-cent prices to get that up and running. But of the processors they primarily support, there are probably three or four key ones - I think it’s the STM32 and the ESP - and I don’t know if they do ARM just yet. I know there have been some discussions around it. There’s a board called the Raspberry Pi Pico that has ARM on it, which a lot of people are interested in.

Jamie : It’s really cool that there’s a circular connection between .NET nanoFramework and this show - that would have been the episode where I talked to José, creator of .NET nanoFramework.

I wonder if I’ve overstepped by asking about ARM, because I do know there’s a licensing agreement that hardware vendors have to sign when they release an ARM device. Maybe ARM is a bit much to hope for, but who knows what the next year or so will look like.

With the increasing hardware shortages and the increasingly inflated prices of RAM and things like that due to AI-based data centres, maybe embedded systems are the future. Maybe reusing slightly lower-spec hardware to get the work done is the way forward. Maybe we need to learn to constrain our apps a little more, right?

Andy : It comes back to that point you were making earlier - constraints lead to creative solutions. There are actually people running AI-type inference on microcontrollers. Not the generation of models, but the use of them can be run on microcontrollers. Basic things like voice recognition, or even a little still-image or video processing, would be possible on these devices. There might be a bit of lag in a lot of cases, so it might not be the fastest thing ever, but it can definitely hold its own.

Jamie : So it is possible to do these things - you just need to… I’ve got this rant I’m working on about whether what we do is truly engineering, but the real strength of an engineer is to balance the pros and cons. Embedded devices are worth looking at from that perspective.

You said you can’t run generative models on the edge where the embedded devices are, but you can do inference-based work from the outputs of those generative models. You might be able to do video but it’s a bit laggy - well, if you’re happy with laggy video, you can do it. You don’t need a thousand pounds’ worth of computer sitting in your back garden recording videos and detecting birds or whatever. You can use an embedded or IoT device. I keep mixing those terms up - maybe they’re different.

Andy : I’m thinking that detecting birds is probably a little too fast-moving to pick them up reliably. I do remember seeing a project quite a while back where somebody was counting the number of boats going past their house on the canal - I’m sure that would be well within the capabilities of IoT.

But I have a funny feeling that if you were trying to detect something like a blue tit flying down to a table, it’ll be gone by the time you’ve detected it. Potentially, though, you could use a simpler model: if something that I know is not background appears in this video frame, start recording. You could then do offline processing and determine that, actually, that wasn’t a blue tit, so you can throw the recording away.

Jamie : That goes back to engineering a solution to the problem. We’ve got this device out there in this specific area, and we can’t capture or make decisions in real time. But, like you said, why not offload that to the equivalent of a supercomputer - which is the laptop we’re talking through right now? That can handle it easily.

I think that’s maybe where we need to go as an industry: more constraints, so we can actually focus on what we’re doing, rather than just building things that are potentially bloated and don’t necessarily solve the problem. I don’t know - I’m getting ranty again. I’m sorry, Andy.

Andy : No, I’m totally with you. I do get the issue of having gigabytes of data just to show a form on a webpage. The massive size of the tool chains feels more of an issue to me than slightly larger web pages. But there are potentially optimisation techniques and tricks you pick up from working with these smaller systems that you can apply back to the larger world - you go, “Ah, I can now make my website twice as fast.”

Jamie : Absolutely. We talk in modern .NET server-side work about breaking a method down into smaller, composable parts - well, that increases your stack frames. Without even having to ask you, I’d bet that having a really large stack frame on an embedded device with 512 kilobytes of memory isn’t going to fly, right?

Andy : I haven’t really thought about that specifically, but you’ve got to think about scopes and how long you keep things in memory. I’ll confess I haven’t done it on these little devices, but I have been streaming quite large text files - around 400 megabytes of text - and a lot of thought went into how to not keep it all in memory at once. We process it in chunks, pass it on to the next system, then process the next chunk. Techniques like that could be very useful in the embedded world.
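Andy's chunked-processing approach - read a piece, hand it on, reuse the buffer - can be sketched in plain C#. The chunk size, method names, and the `HandleChunk` hand-off are all illustrative assumptions; the episode doesn't describe his actual implementation.

```csharp
using System.IO;

class ChunkedReader
{
    // Process a large text file in fixed-size chunks so that only one
    // chunk is ever held in memory, rather than the whole file.
    public static void ProcessInChunks(string path, int chunkSize = 64 * 1024)
    {
        var buffer = new char[chunkSize];
        using var reader = new StreamReader(path);
        int read;
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Hand the chunk to the next system, then reuse the buffer.
            HandleChunk(buffer, read);
        }
    }

    static void HandleChunk(char[] chunk, int length)
    {
        // Placeholder: pass the chunk on to the next stage of the pipeline.
    }
}
```

The same shape - bounded buffer, process, release - is exactly what a 512-kilobyte device forces on you from the start.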

Jamie : Do exactly what you need to do, then immediately clean up and prepare for the next thing - almost like how modern kitchens and restaurants work.

That’s the thing, right? Applying lessons from outside your work to your work. I’ve been watching a lot of videos by cooking YouTubers - restaurant YouTubers, perhaps - about how they actually do their day-to-day work, what a three-hour shift in a kitchen looks like. Just-in-time doesn’t work in a kitchen, right?

I watched a video just before we recorded about this, and they were talking about trying to quickly service all of these orders when suddenly an order came in that required egg mayonnaise. They didn’t know egg mayonnaise was on the menu, and it takes ten to fifteen minutes to make from raw ingredients. The whole chain slowed down by ten to fifteen minutes because one person had to make it from scratch - whereas if they’d known ahead of time, they could have made it ahead of time.

That fits with what we’re saying: if I know my processing pipeline needs to process a log that comes in, look for a very specific thing, then immediately send it on, I need to know ahead of time what that thing is. I might not be able to generalise it, because maybe I only have 512 kilobytes of RAM. The overhead of reflectively instantiating a new class, calling a method on it, and passing that through the pipeline - even just that overhead might be too much, right?

Andy : You’ve just given me a really interesting thought. As I said, you’re building with the same tools you use on a day-to-day basis. Things like source generators - where you’re pre-compiling to produce optimised code - are a perfectly reasonable approach even for .NET nanoFramework. All the Roslyn-type tools for creating and analysing code can all be there as well.
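The source-generator idea Andy mentions - doing work at build time so the device only ever sees pre-compiled, optimised code - looks like this in its most minimal form. This is a generic Roslyn sketch, not nanoFramework-specific code, and the generator and emitted names are made up for illustration.

```csharp
using Microsoft.CodeAnalysis;

// A minimal incremental source generator: at build time it injects a new
// source file into the compilation, so the cost is paid on the developer's
// machine rather than on the microcontroller.
[Generator]
public class HelloGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        context.RegisterPostInitializationOutput(ctx =>
            ctx.AddSource("Hello.g.cs",
                "static class Hello\n" +
                "{\n" +
                "    public const string Message = \"generated at build time\";\n" +
                "}\n"));
    }
}
```

Because the generator runs inside the compiler, the technique applies to any target the compiler can build for, which is why it's a reasonable fit for constrained devices.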

Jamie : It’s a wild world. If we go back before 2016 when José started creating .NET nanoFramework - if we go back to the mid-nineties, there were folks literally flipping bits in C or even assembler. If you’d said to them, “We’ll soon be able to use this entire runtime and write apps in almost human-readable language, and on top of that we’ll have source code generators that generate code for us at build time and take care of a whole bunch of what we need - and you could probably do metaprogramming with it too” - their heads would have exploded.

Just the innovation of it. We are - you are - standing on the shoulders of giants.

Andy : There’s innovation still happening in the nanoFramework world as well. I believe generics got added quite recently, and I’m quite a big fan of generics. The biggest place I use them day-to-day is logging - they open up a whole extra set of libraries and capabilities that maybe weren’t there before.
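Andy's point about generics unlocking logging libraries is the familiar `ILogger<T>` pattern from full .NET. The sketch below uses the Microsoft.Extensions.Logging API; whether nanoFramework's own logging packages expose exactly this shape is an assumption, and `GreenhouseMonitor` is a hypothetical class.

```csharp
using Microsoft.Extensions.Logging;

class GreenhouseMonitor
{
    private readonly ILogger<GreenhouseMonitor> _logger;

    // The generic parameter gives the logger a category name derived from
    // the type, without passing a category string around by hand - one of
    // the small conveniences that simply isn't possible without generics.
    public GreenhouseMonitor(ILogger<GreenhouseMonitor> logger)
        => _logger = logger;

    public void Record(double temperatureC)
        => _logger.LogInformation("Temperature is {Temperature}C", temperatureC);
}
```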

Jamie : We don’t really think about that, but if you take a language feature away, then the number of NuGet packages - or even things that existed pre-NuGet - that are no longer available to you is enormous. You’d have to reinvent the wheel quite a lot.

Andy :

Going back to the nineties, I was working on a completely different kind of embedded system - aerospace. That was all assembly, critical timing, the lot. A completely different world.


You know that moment when a technical concept finally clicks? That's what we're all about here at The Modern .NET Show.

We can stay independent thanks to listeners like you. If you've learned something valuable from the show, please consider joining our Patreon or BuyMeACoffee. You'll find links in the show notes.

We're a listener supported and (at times) ad supported production. So every bit of support that you can give makes a difference.

Thank you.


Jamie : Just thinking about the amount of effort that’s gone into all of this. So that I - so that I personally - can go out and buy a small device, an ESP32, and start making an LED blink. In C#. Oh my goodness. That greatly undersells what .NET nanoFramework can do and what you’ve been doing, of course. But we’ve got to start somewhere, right? We start with the blinky LED and then build up from there - two blinky LEDs. I’m only missing… I’m only missing…

Andy : If you can blink an LED, you can run a washing machine. That’s essentially what they do - they turn things on and off in a very defined sequence. The sensors in a basic washing machine are quite simple: is the drum spinning, is the door locked? Then you say, “Turn the motor on, make it spin,” or, “Turn the valve on and fill with water,” or “drain the water.” They call the blinking LED the “Hello World” of embedded applications.
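The "Hello World" Andy describes is only a handful of lines in nanoFramework. This follows the `System.Device.Gpio` API that nanoFramework provides; pin 2 drives the onboard LED on many ESP32 dev boards, but that number is an assumption - check your own board's pinout.

```csharp
using System.Device.Gpio;
using System.Threading;

public class Program
{
    public static void Main()
    {
        // Open the LED pin as an output. Pin 2 is common on ESP32
        // dev boards, but it varies by board.
        var gpio = new GpioController();
        GpioPin led = gpio.OpenPin(2, PinMode.Output);

        while (true)
        {
            led.Toggle();       // flip the LED state
            Thread.Sleep(500);  // half a second on, half a second off
        }
    }
}
```

Swap the LED for a relay driving a motor or a water valve and, as Andy says, the same on/off sequencing is most of a washing machine controller.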

Jamie : Quite right. If you can blink an LED, you can change the whole world of embedded applications.

One of the projects I had to work on at college - there were two that really stuck in my mind, both with the 68000 board. One of them was to simulate a washing machine, so it’s interesting you’ve brought that up. What I did was write something that would, at compile time, bring in a script that laid out the order of operations. I had a bank of eight LEDs - eight bits, so 256 possible combinations - I only needed about 12, right?

It was like: this one represents that it’s switched on, this one that the door is locked, this one that it’s spinning, this one that it’s filling with water, this one that it’s draining. I was able to simulate an entire wash and drain cycle with eight LEDs. It was fantastic.

Then the other project I remember: I had to use an air rifle to measure the speed of a pellet. I rigged up this arrangement - not in any way health-and-safety friendly, because I nearly shot myself in the head - with two strips of silver foil. It’s a simple test: you fire the gun and have your embedded system pass a voltage over both strips via crocodile clips. As soon as that voltage stops for either one of them, you either start or stop a timer. The device then knows how much time passed between breaking the first strip and breaking the second. You tell the embedded device how far apart the strips were, and then - speed is distance over time, right? You’ve got how fast the pellet is flying. You don’t need gigabytes of RAM to calculate that.
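The arithmetic at the heart of Jamie's chronograph project is tiny, which is his point about not needing gigabytes of RAM. A hedged sketch - the strip spacing and timing figures below are invented for illustration, and a real build would capture the two timestamps from GPIO edge interrupts rather than hard-coding them:

```csharp
using System;

class ChronographSketch
{
    // Each foil strip completes a circuit; when the pellet breaks a strip,
    // the voltage drops and the device records a timestamp. Speed is then
    // just distance over time.
    static double SpeedMetresPerSecond(double stripSpacingMetres, TimeSpan gap)
        => stripSpacingMetres / gap.TotalSeconds;

    static void Main()
    {
        // e.g. strips 0.5 m apart, broken 2.5 ms apart -> 200 m/s
        double speed = SpeedMetresPerSecond(0.5, TimeSpan.FromMilliseconds(2.5));
        Console.WriteLine($"{speed} m/s");
    }
}
```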

Andy : You might need the gigabytes if you wanted to graph the results over time, but you could definitely do a scientific experiment at that level. I think that’s potentially where .NET nanoFramework really shines: if you need to get something up and running quickly for an experiment or a test, and you don’t want to spend months developing and optimising it, you can get stuff up and running in a high-level language without needing to learn the intimate details of the microcontroller and its registers.

Jamie : Absolutely. As we start to come towards the end of our conversation, Andy - I feel like I could talk to you for hours about this, and maybe we will next time you’re up this way or I’m down that way. As we’re coming towards the end, I was wondering: what would you recommend as a first step for someone who’s heard about .NET nanoFramework, thinks it sounds awesome, and wants to build something with it? Let’s say they go and buy an ESP32 - how do they get started, even just doing the Hello World of embedded devices, which is blinking LEDs?

Andy : I would say get one of the pre-made development kits. If you go to the .NET nanoFramework website, there are a number of community images and built-in images, and some of them refer to quite specific hardware - those are probably your easiest entry point. I’m trying to remember the name - it’s something like M3 or something with a short name - and it’s basically a little board with a USB connector and a display.

On the software side, it’s pretty much what you have day-to-day: either Visual Studio Code or Visual Studio. You install a plugin and off you go.

Jamie : Super easy to get started. If you buy one of the devices listed in one of the getting-started guides, the guide will take you through the Hello World - blinking LEDs - and then, “Oh cool, this device also has a screen, so let’s do something with the screen. Maybe it has a button, so let’s do something with a button.” It shows you how to use all the different things on the device.

Are there any other quick hints and tips for getting started that folks ought to know? Then, how can people connect with you to keep an eye on what you’re up to in the public space?

Andy : If you’re not really into the electronics side of things, there are a lot of ready-built modules available. You mentioned HATs - those are specifically for the Raspberry Pi, but similar add-ons are called capes and other things on different platforms. A lot of the modules available for those other platforms can be used with .NET nanoFramework, though you might have to put in the effort of rewriting some of the code that communicates with them.

I’ve done that by looking at C code and thinking, “Well, if it’s written in C like this, then in C# it would look like that,” and a lot of it is almost line-for-line - just adding the occasional semicolon and so on. I would recommend: as well as buying an off-the-shelf dev kit, buy off-the-shelf sensors too. Rather than worrying about whether you’ve connected everything correctly, you just plug things in and get started. Once you’re a bit more confident that your software works, you can then swap your hardware out for hand-soldered equivalents.

The idea is that you’re not fighting on two fronts at once - you’re dealing with either just the software or just the hardware. With my own projects I always say I’m going to focus on software, hardware, or mechanical, but I nearly always end up using all three at once. I find myself wondering: is the motor turning? Is the driver not working? Is the software that talks to the driver not working?

If you can isolate your problem space, that makes things a lot easier.

As for where to contact me: my work space is TechieChap London - you’ll see me around GitHub and various other places. My hobby stuff is Workshop Shed. I’m on most of the social media platforms, there’s a blog, and there’s a GitHub repo where you can find all the good stuff.

Jamie : Amazing.

Wrapping Up

Thank you for listening to this episode of The Modern .NET Show with me, Jamie Taylor. I’d like to thank this episode’s guest for graciously sharing their time, expertise, and knowledge.

Be sure to check out the show notes for a bunch of links to some of the stuff that we covered, and a full transcription of the interview. The show notes, as always, can be found at the podcast’s website, and there will be a link directly to them in your podcatcher.

And don’t forget to spread the word, leave a rating or review on your podcatcher of choice—head over to dotnetcore.show/review for ways to do that—reach out via our contact page, or join our discord server at dotnetcore.show/discord—all of which are linked in the show notes.

But above all, I hope you have a fantastic rest of your day, and I hope that I’ll see you again, next time for more .NET goodness.

I will see you again real soon. See you later folks.

Follow the show

You can find the show in any of these places