Episode 125 - Mixed Reality, the Metaverse, and Making Magic Happen with Simon Jackson
The .NET Core Podcast
Mixed reality and the metaverse are rapidly evolving technologies that offer endless possibilities for software developers and designers. In this episode of The .NET Core Podcast, Simon Jackson, a Microsoft Global MVP for mixed reality, discusses the implications of mixed reality and the metaverse for software development.
One critical consideration for developers and designers is accessibility. Jackson stresses the importance of considering accessibility in all aspects of development, including remapping controls for left-handed or one-handed users. Accessibility concerns span sight, hearing, touch, taste, and smell, and can be permanent or temporary. Mixed reality presents new challenges here, such as how people with physical limitations interact with virtual environments. Developers are encouraged to consider a wide range of potential users and their needs when designing mixed reality experiences.
To illustrate the potential impact of mixed reality, Jackson shares his work on Project Fizzyo with Microsoft. The project aimed to help children with cystic fibrosis manage their treatments through gamification: it involved building games that used the actual act of doing the treatment as the input controls for the game, such as blowing up balloons or firing darts. The project's success highlights the potential for mixed reality to improve healthcare outcomes and make difficult procedures more accessible and engaging for patients.
The metaverse is a term coined by author Neal Stephenson in Snow Crash. In modern usage it refers to a next-generation web beyond today's web of static pages, videos, and images: a world of data, where the real world is intermixed with digital information about each place. The hyperverse is a term used to describe the interconnected digital universes that exist, where each location can have several different layers of information based upon it. Emerging digital museum projects are examples of the metaverse concept in practice, with the real world intermixed with digital information about each site.
The metaverse is not about crypto, or how to buy things or add value to something, but about the interconnection of information around subject matters, whether a physical element, a person, or a digital piece of information. The modern-day web browser is not suited to handling the metaverse concept, which requires a new level of interconnected data.
Jackson is working on creating a real-life world experience that is augmented in some fashion, allowing visitors to museums to have a more engaging and interactive experience. This includes being able to have conversations with creators of exhibits and networking with friends, as well as being able to take the experience with them on their phones and share it with others.
The integration of AI into virtual and augmented reality experiences is crucial for their success, and developers should focus on incorporating AI into their designs. There are multiple technologies available for building virtual and augmented reality experiences, including Unity, Unreal, Godot, and Game Maker, each with their strengths and weaknesses. The choice of technology used to build virtual and augmented reality experiences depends on factors such as the target audience, the platform, and the developer’s skills and preferences.
In conclusion, mixed reality and the metaverse hold enormous potential for software developers and designers. However, it is critical to consider accessibility, AI integration, and the right technology to achieve success in building virtual and augmented reality experiences. With the right approach, these technologies can facilitate significant positive change across a range of industries, including healthcare, education, and entertainment.
Hello everyone and welcome to THE .NET Core Podcast. An award-winning podcast where we reach into the core of the .NET technology stack and, with the help of the .NET community, present you with the information that you need in order to grok the many moving parts of one of the biggest cross-platform, multi-application frameworks on the planet.
I am your host, Jamie “GaProgMan” Taylor. In this episode, I talked with Simon Jackson about mixed reality, the metaverse, and what they actually mean for software developers and designers. We also discuss some of the most interesting uses of mixed reality from the last few years.
Along the way, we have a discussion on accessibility and what developers and designers should be thinking about when building their applications and experiences.
So let’s sit back, open up a terminal, type in
dotnet new podcast and let the show begin.
So, Simon, thank you ever so much for spending some time with me today, with the listeners today. I genuinely appreciate it when people take time out of their day and you’re joining me before your work day starts, so hopefully we won’t tire you out too much. But thank you very much.
It’s always good fun on a Monday. Monday is a long day.
Early morning Monday recording. Oh, that rhymes. That should be a song.
The title of the podcast.
Absolutely. So would you mind please, perhaps introducing yourself to the listeners so they know a little bit about you, like the kind of work you do, that kind of stuff. Is that all right?
Yeah. Okay. So I'm Simon "Darkside" Jackson, named after an 80s retro game, which was the sequel to Driller. That just gives an indication of how old I am these days. Yes, I've been a longtime game developer and mixed reality developer, now working on all things here, there and everywhere. Always a very big, keen educator; I've always had a passion for basically building things, taking complicated things and breaking them down, just to show people how to do them, working it out step by step and filling in the gaps where documentation always falls short.
I've worked in many industries, from healthcare to construction to just about everything, really, except for finance. I've saved myself from working in finance because I like my sanity too much. I say that, but I work in medical, and sometimes clinical staff have very unique perspectives on how things should be done, which does test your sanity a bit. But hey, it is how it is. I am a Microsoft Global MVP for mixed reality. I am now a Microsoft Game Dev Ambassador, which is what was formerly the Xbox MVP. And I'm also now an Xbox Community Champion, which is what was formerly called the Xbox MVP group, until that met its demise.
Everything seems to be going that way; 2020 is the year of shrinkage. So I still have my three awards for the moment; it's coming up to renewal season. Us MVPs always have that imposter syndrome of, "have I done enough? Am I doing enough? Am I getting out there?" Granted, a lot of my MVP work tends to be behind the scenes and I can never talk about it. So I have to put people in touch with people who know people, who can talk to people, who can say, "yes, he's done things."
I've worked on a lot of games, especially working with the ID@Xbox group and the game dev community. You end up working with studios behind the scenes, getting people started, getting them ahead in the program and things like that. Or you work with unannounced titles to help them through the whole application process, which looks similar. But I think everyone in the game dev community suffers from imposter syndrome: "oh well, I'm only doing this much." Okay, but what are your dreams? Where are you going with this? Put that down, then. Outside of all that, I'm a husband and father of four. My eldest has special needs, which is always an interesting challenge, but it also brings, especially to the Xbox and gaming groups, a keen view on accessibility and the things people need to do to ensure they have the widest possible reach for the audience they want to market their game to.
And that's not to say you have to do everything for every game. Where it makes sense, where you feel you want to attract that audience, where you want to be more inclusive, that's where those kinds of things come in. So again, it's all in the education thing. I'll go, "look at this resource, or do this," and it's good fun.
I don't get out as much as I used to, especially since 2019, but I'm an avid speaker. I'm always out doing events, whether it's promoting books that I've written. I've got several Unity books, and might be working on some more. I must be mad; I keep doing them two at a time. I did two at the same time last time and said, "never again." And yeah, I don't know why we do this to ourselves.
Keeps us out of trouble.
Keeps me into trouble.
But yeah, that's me in a nutshell. I'm basically out there helping out everyone I can, working with as many game engines as possible. I always try to pick up a new one every year. Saying that, I've tried to pick up Unreal the last four years running, and there's always something. So I think it's likely to be Godot this year instead, although they're still working on C# support for some mobile platforms, and C# is my primary development language. Beyond that there's Unreal, MonoGame, XNA, C++; the list goes on, with anything to anything for anything.
Nice. I was going to say, I mean, obviously you’ve said it already, but I feel like having a child with alternate needs - I’m not sure whether that’s the correct term to use, but I’m going to use that term.
It changes every year, of course.
Right. I tell people that I’m using words with unintentional ignorance, right? I’m not trying to be horrible, but I don’t know the right words.
But yes, having your eldest with those alternate needs, special needs, however we want to say it, whatever the correct term is, I feel like that does indeed set you up with a fantastic viewpoint of the accessibility concerns that people have and that we can bring into our applications. Right. I'm not going to make this into a conversation about accessibility in apps, but I feel like it should be something that's top of mind for all developers. Regardless of where you are in the stack, regardless of where you are in your career, you should be thinking about accessibility. To the point where, like web apps: in some countries the accessibility stuff inside of your HTML is a legal requirement now, right, that you have to include. And it's often just using ARIA tags and semantic tags inside of your UI so that screen readers can understand what your app is doing and where things are, instead of div-div-div-div (*cough* React). But I guess the whole thing that I'm getting towards here is that you are in a unique position, I suppose, of being able to actually sit there and use a piece of software with your eldest child and go, "actually, this is hot garbage because we can't do anything together." Or, "actually, this is brilliant because the designers and the developers have actually thought about someone who has these particular needs," right?
I mean, it's certainly a keen focus in all the Xbox groups, whether it's game dev, whether it's the experiences group. You've probably seen from a lot of the literature that accessibility is high on the agenda. And it's not just accessibility, it's accessibility, inclusivity, and safety. Ironically, one of the biggest challenges I have with my eldest, because he has no stranger danger, is safety. He will sign up to anything, which is a pain. But also when it comes to gaming and things like that, it's about ensuring he's meeting safe people, or at least in an environment which can be considered safe - or safer, because it's still the internet at the end of the day.
But a lot of the accessibility groups, whether they're working on sight resources, audio resources, or the physical limitations or abilities of players, it is a huge focus and we're constantly pummelled with these things. In fact - I'll give you a link to this later - there's the Xbox Ambassadors programme, where you can actually sign up and become an ambassador. What you get out of it is little missions and things to do, and there's a whole new accessibility arm to that; and there's tons of resources, both from the view of the players, "how would I play this thing?", but also - and this is what I really like of late - a developer focus. So they show you the information they give to developers: "watch this video; here's the certification requirement you need to pass this bar, and here's what you need to do to do it."
And a lot of those things are baked in. Something as simple as remapping controls is an accessibility feature, because some people are lefties or righties or somewhere in between, or they have less ability in one hand or the other, or they might only have one hand. And ironically, this even comes up in the mixed reality resources, because if you look at the whole HoloLens 2 and all the hand tracking, everyone's using two hands and they're like, "oh look, I can pick this up, I can move it, yay!" Okay, but what if the person's only got one hand? What if the person has no hands and they still want to interact with your environment? And it's challenging people with these kinds of questions: "fine, you as an able person can do that, but what if?"
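To make that concrete, here is a minimal sketch of what a remappable, action-based control layer can look like under the hood. It is written in Python for illustration; the action names and input names are hypothetical, not from any real SDK or engine.

```python
# Default action-to-input bindings for a hypothetical game.
DEFAULT_BINDINGS = {
    "jump": "button_a",
    "interact": "button_x",
    "move": "left_stick",
    "camera": "right_stick",
}

def remap(bindings: dict, action: str, new_input: str) -> dict:
    """Return a new binding table with one action moved to a new input.

    If another action already uses that input, swap the two so that
    no action is left unbound."""
    updated = dict(bindings)
    for other, inp in bindings.items():
        if inp == new_input:
            updated[other] = bindings[action]  # swap to avoid a dead action
    updated[action] = new_input
    return updated

# A "swapped sticks" preset for a left-handed or one-handed layout is
# then just a remap call rather than a hard-coded alternative scheme:
lefty = remap(DEFAULT_BINDINGS, "move", "right_stick")
print(lefty["move"])    # right_stick
print(lefty["camera"])  # left_stick
```

The design point is that game logic only ever asks for actions ("jump", "interact"), never physical inputs, so any accessibility preset becomes data rather than code.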
And the last thing you want to do is to cut off potentially a huge part of your audience - or, in enterprise terms, actually lose a bid because you don't have access to a certain portion of that company's workforce who needs to do things. Construction is the most brilliant example, because nine times out of ten the person who's using the headset won't have their hands free, because they're doing things with those hands, like holding up a beam or hammering something in, but they still need to be informed or to interact. So in some ways, accessibility also goes beyond just a person's limitations. There might be limitations in what they're doing, and those same things apply in a slightly different cadence.
Yeah, there's a wonderful infographic - and this is more for the listeners, obviously, because you'll know about this - by Microsoft that shows the different types of accessibility. Perhaps you have accessibility concerns with your sight, with your arms, with your ears, with your touch, taste, smell, that kind of thing. And they can be permanent, because of maybe a disability or an accident, or they could be temporary. One of my favourite ones is someone holding a baby: they have one hand, right, because they're holding a baby. So you need to be able to ask these questions. Does your app need to cater to the people who are temporarily using one hand? Or to people who can't hear properly, maybe because they were born deaf, or maybe they have auditory processing problems? Or indeed, what if they are using your app in a loud, noisy environment like a construction site?
And these concerns, at the very least, need to be talked about. And if it is decided that you are a bookshop with a book-selling PWA that's downloadable to your phone, and that people who can use only one arm, either temporarily or permanently, or who are using your app in a loud, noisy environment, aren't really your core customer base, then as long as you can say, "hey, we're okay with this," at least you've made that decision. You've had that discussion, you've had that design moment where you've thought, "well, hang on, what about someone who is one-handed, or who is using a screen reader? Do we need to cater to the people who are using screen readers?" And people listening might be thinking, "well, we're talking to Simon about VR, mixed reality, and video gamey things. Why would I need to cater to a blind user?" Well, guess what? I remember about ten years ago there was someone who posted a long play video on Reddit and they were a fully registered blind person. And they sat through and played the entirety of The Legend of Zelda: Ocarina of Time on the Nintendo 64. So even blind people can use your apps, right?
I have several good friends who have sight impairments. SightlessKombat - you'll find him in all the places - is fully blind. You should see his room when he's on a stream; the wall is littered with memorabilia and things he's got. But he plays the likes of Gears of War and The Last of Us. Sometimes he needs an assistive person with him just to help him move around, but he's controlling the game, he's doing the firing, using some of the features that they enable in these games for people with those kinds of accessibility needs. Granted, some of them do get abused. Aim assist is the biggest one, because of the number of people who do not need aim assist but turn it on for that competitive advantage. No, those tools are there for people who need them. They're not there to be abused, but they are.
But it's interesting you mention apps. I know of one Microsoft engineer - I believe he still works there - who actually built an AR app. Okay, so this is a person who was completely blind and partially deaf, and he built an AR app to help with navigation around the Microsoft campus. I don't know if you saw this one, but he has headphones in which have 3D spatial audio, his phone is geolocation-aware, and he uses the camera as his vision, so he can hold it up and it'll tell him what it sees. And he says, "right, I need to get to Microsoft campus room three," and it will navigate him across roads, avoiding traffic. These are the kinds of things which are potentially limited in scope of use, because it's a very focused target audience. But those same capabilities are available to anyone who doesn't have those impairments, because you still need to be able to say, "I have no idea where I am on the Redmond campus, and I need to get to that room over there." And having those same features, with 3D binaural audio and all the rest, means that he, as a non-sighted person, without a stick, without a dog - granted, he still has a dog - can easily get around.
He wrote that app, including all the other solutions and things needed to train it to know what to do. And it's fantastic. And whether it's games, whether it's enterprise, whether it's just a bit of fun, these are considerations to keep in mind: who your target audience is and what you can do to help them. Though I will say that an age verification check which is just "please enter your birth date" is not age verification. I think we can all agree on that.
Yes, I agree.
Do not base your child safety based on a date that someone can type in.
Yes. Or select from a drop down or something.
If you’re able to do basic maths and you desperately want to access that content, you’ll do the basic maths to access that content, right?
Yeah. I'll just put "I was born in the year 1990" in there.
Absolutely. Oh, my goodness.
You've just now reminded me of a wonderful lyric from a song by Jonathan Coulton where he sings, "I still can't believe you could be born in the 90s."
But there you go. Right? Yes. So we chatted a little bit about accessibility there, which I think is a very important thing. Whilst I'm not the greatest advocate of accessibility, I feel like when I come into a team, or when I'm working alongside a team, I say, "hey, have you thought about blind users, deaf users, users with colour blindness?" Because there are lots of different variants of colour blindness. It's fantastic to be able to use all the different colours in your app, but what if your user is red-green colour blind? I'm not sure if that's a combination you get, but let's say it is. What if your user is red-green colour blind and your okay button is green and your cancel button is red, and they both look monochrome to the person who's using the app? Right. Well, then what?
Yeah. You get those same things everywhere, and there are tons of resources out there to help you.
Some of the best ones turn your browser into a simulator, with a scale showing you what those things are like - showing you what dyslexia is like, for instance. That is freaky. I have huge respect for people with dyslexia and how they manage to find ways to cope, because it's scary. Same with colour blindness, where you need to work with high contrasts. I actually do a lot with image tracking, and contrast is a big thing when you want to be able to identify the things you want to track.
And there's tools out there that will help you to experience or see your app or game with these kinds of filters on, to give you some level of appreciation. It's never a true appreciation, because you don't live with that condition: you can always turn it off, whereas they can't. But yeah, there's a fantastic amount of tools and resources out there to help people both build for this and also test it personally themselves. But again, nothing really changes the fact that the best way to test is to give it to someone with those needs. Like Sightless: he's a sight consultant for a number of games and studios. He works with the Gears of War team to build in their accessibility when they're doing all the testing. Same with people with auditory issues or partial impairments. There are fantastic consultants out there, and they're all really nice people. Considering some of the feedback they get, they're still really nice people.
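On the contrast point: many of those checking tools boil down to the contrast-ratio formula published in WCAG 2.x, which is simple enough to sketch. Here is a Python illustration of that calculation; the colour values in the example are arbitrary, chosen only to echo the red/green button discussion above.

```python
# WCAG 2.x contrast ratio. Colours are (r, g, b) tuples in 0-255.

def _channel(c: int) -> float:
    """Linearise one sRGB channel per the WCAG relative-luminance definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance of an sRGB colour (0.0 for black, 1.0 for white)."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Contrast ratio from 1:1 (identical colours) to 21:1 (black on white)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum, 21:1. WCAG AA asks for at least 4.5:1
# for body text; a pure red on a mid green falls well short of that.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
print(contrast_ratio((255, 0, 0), (0, 128, 0)) < 4.5)        # True
```

So even before handing a build to a consultant, a check like this can flag a green okay button next to a red cancel button as a luminance problem, not just a hue problem.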
Oh, absolutely. Someone who has been out of my life for a long time, I shall say, was born deaf and had a cochlear implant, and she showed me a website that attempted to give an approximation of what a cochlear implant sounds like. Until that point, I had just assumed that a cochlear implant gives you 90% hearing or something like that. Even now, the way that I describe it to people is that it's like a really badly tuned AM radio, but with loads more static over the top of it. And the amount of concentration that is required for people with any kind of assistive technology around their eyes and ears - their sight and hearing - just to navigate the world is immense.
Two of my little ones are dyslexic, and until we had their consultations and stuff I didn't realize that a dyslexic person has to work twice as hard as everyone else just to be able to read things. Listeners, I challenge you: look around the room or the space you're in and count how many things have letters and numbers on them. Now imagine that in order to read one of those you have to work twice as hard. Because perhaps you are perceiving the letters as moving ever so slightly, or exploding into fragments, or going in and out of focus, just because your brain is wired in such a way that you can't perceive those letters and markings and scratchings correctly.
So what I’ll say as we close off this bit, please, developers, designers, everybody, spend a little bit of time considering other people and the people who will use your applications. And if you want some real numbers, go back a bunch of episodes. I did an episode on Empathy, Sympathy and Compassion where I actually laid out how many people are actually registered blind in the UK and the US, and how much of a percentage of the population that is, just to give you real numbers, so you can turn to your boss and go, “actually, we need to have a chat about this.”
Actually, one of my proudest engagements with Microsoft was one called Project Fizzyo, and it came from a slightly different but similar angle: it was for helping children with cystic fibrosis. A hugely debilitating disease, but one that can be managed with treatments. But trying to get kids to do a long, gruelling procedure repeatedly, every single day, is hard. So Project Fizzyo was a collaboration with Great Ormond Street and Microsoft to build games to gamify the actual treatment, which was fantastic. And we had some awesome results coming out of that, where we basically built games in which the actual act of doing the procedure was the input control for the game, whether it was blowing up balloons or firing darts across the room. The engagement itself was just fantastic; that was one of my proudest things to be involved with. And building tools and frameworks to enable developers to then go and build things for it was even more challenging. From a design perspective, you had two inputs: one was a breath, which had to follow a certain rhythm, and the other was one button.
Now build a game with controls that can do a lot of fantastical things with just those two inputs. And for a bit of extra consideration, the breathing had to be rhythmic; it had to follow a certain pattern. So it's blow hard, stop, blow hard, stop, but not too hard, not too soft. And those are the little things. It was a nice, interesting challenge to try and get people into it. So, yeah, that was one of my biggest projects out there.
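As a rough illustration of what that "blow hard, stop, not too hard, not too soft" constraint looks like in code, here is a hypothetical sketch of classifying one exhalation from pressure-sensor samples. The thresholds, units, and sample format are invented for illustration and are not from the actual Fizzyo framework.

```python
# Hypothetical tuning values: pressure bounds in arbitrary sensor units,
# and how many samples a valid exhalation should span.
TOO_SOFT, TOO_HARD = 10.0, 80.0
MIN_LEN, MAX_LEN = 5, 30

def classify_breath(samples: list) -> str:
    """Classify one exhalation as a game input event.

    Only a breath of the right length and peak pressure counts as
    "good" and triggers the game action (firing a dart, inflating a
    balloon); everything else gives feedback to adjust the rhythm."""
    if len(samples) < MIN_LEN:
        return "too_short"
    if len(samples) > MAX_LEN:
        return "too_long"
    peak = max(samples)
    if peak < TOO_SOFT:
        return "too_soft"
    if peak > TOO_HARD:
        return "too_hard"
    return "good"

print(classify_breath([20.0] * 10))  # good
print(classify_breath([5.0] * 10))   # too_soft
```

The gameplay loop then only has to react to a small set of events ("good", "too_soft", ...), which is what lets the same treatment rhythm drive very different games.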
And, yeah, I think it's an incredibly important thing, and I feel like VR and XR, all these different ways of altering reality to bring people in, can perhaps really help with that. And I guess that's how we pivot into our main discussion for the day. I do have written down in my notes - I don't know if the episode will be titled this - "Unity, Metaverse, .NET, VR, XR," right? Those are the words I've written down, because they're all different things, right? So I guess before we talk about Unity and all the different tools, I wonder, could you perhaps give the listeners a bit of a description of what each of those things is? The metaverse, XR, VR, AR, all those kinds of things? Because it's just alphabet soup, right?
Yes, I’ll leave the full one to the end because that’s not an interesting one.
But when we're talking about mixed reality, we're realistically talking about two elements. You have virtual reality, where your vision is completely occluded: you're in a dark room and all the 3D content being presented to you is generated; it's the only thing you can see. And you have the augmented view, where you're non-occluded: you can see the real world, and 3D content is usually either presented in it, which is what you see with mobile AR experiences, or actually interacting with world elements, which is where it gets more intelligent. You have headsets like the HoloLens or the Magic Leap which understand the world and can actually interact with it. These are not two discrete things, because it's a whole spectrum. Content can be fully virtual; it can have some of the real world mixed in with 3D content; it can be fully 3D in the real world; or it can be your world being absolutely enhanced, so doors being recognized, interacting with elements.
The best thing I've seen for this is a game called Tacoma, available in most of the stores, where there's a character who's actually deaf but has these attachments, and the world becomes augmented: they interact with doors through a visual "open door" command, things like that. They can replay conversations of people who have been in that space as if they were there at the time; obviously they can't talk to them, because it's the past. So that's the whole augmented virtual world, as it were.
And then it gets into the soup. There's this lovely term called the metaverse, and it's a really badly overused term. Neal Stephenson coined it back in Snow Crash, ironically as a virtual world multiple people can join in. But in the modern world it's become shorthand for the next generation of the web, as it were, which is hilarious because it's nothing new, just another way people are buying in. It also gets mixed up because people call it Web 3.0, which is another very broken term, because that gets mixed up with crypto and things like that, and realistically it's none of that. Where it fundamentally comes from is that the current modern-day web was built for static web pages, videos, images, and some text. But that is ultimately just what your web browser sees and how it interprets it; and your web browser is just an app like everything else. There's a whole argument about, "oh, I want it on the web." Okay, but it's not a native app; it's just in a web browser, which is a native app. That was a fun discussion to have. But your modern web browser has decades of old technical debt in there, trying to work with stuff from way back when we had the first iterations of how the web came to be. All of that old code still exists in the modern-day web browser, and everything has to sort of filter through it. So websites from 20 or 30 years ago will still work in your modern browser thanks to all the backdated things, but it's a huge amount of technical debt. So when people start to talk about Web 3.0 or the metaverse, what they're talking about is sort of the next generation of browser and what that can mean. And the way I frame it when it comes to these kinds of subjects is that the metaverse ultimately is the world of data. Beyond the web browser, which just shows you a view of the world, it's the fact that you can take that world with you, whether it's augmented, whether it's virtual; it is both, not one or the other.
Some will say it’s only augmented, so you have to be there. And some will say it’s only virtual, so I can do it. Whatever. It’s a mix.
And ultimately it comes down to this: instead of just a digital web page presentation, you've now got the real-world representation augmented with digital information about a site. So I'm looking at a statue and I can then see all the history of that statue, who created it, and it's linking all this disparate information together. Some of it might be visual, some of it might be text, but it's still the metaverse. The metaverse is not VR and it's not AR. It's somewhere in between: the real world intermixed with data.
Now, where it gets more interesting, with some of the standards groups we work with, there's also this thing called the hyperverse. We're going to 'verse all the things; it makes me want to watch an episode of Firefly, where they've got the 'verse. And ultimately all these verses - we're all verses now - make up the hyperverse, where realistically what you're doing is taking all this content around the world and saying, "fine, that's this world, but this is a digital age, so I can now actually have multiple of these verses." The multiverse, the hyperverse; there's too many terms, right? It gets really bad. But ultimately, when you're talking about a digital universe, whether it's based upon the Earth or not, each place or location can have several different layers of information based upon it. So I might go to, say, Times Square in New York, and I can see all the advertisements augmentified, so I can see what I'm going to do. You might now have a travel verse layered on top of the metaverse, so I can see all the travel information: navigation, where to go, how to get places.
On top of that, I'm going to have the entertainment verse, where I'm actually now seeing actual concerts. In Times Square, I think, some have been done recently where they take over the square and basically, right, you're in a concert venue. Same physical place, multiple different digital layers applying to it. But it's also the same case for those people who are not there. So am I still in that digital representation if I'm actually in the UK, watching a virtual concert in New York, either as a digital avatar or just as a camera in the scene?
So when we talk about the metaverse, realistically it's just a new-age way of digitally representing all the information we have in the world, interconnected in different ways. And the key thing is not crypto and how I buy things and how I add value to something. It's about interconnecting information around subject matters - whether it's a physical element, whether it's a person, whether it's an actual digital piece of information - trying to marshal all that information together. And this is where, when people talk Web 3.0, it's the fact that the modern-day web browser is not suited to handling that. Yes, we've got APIs; yes, we can make calls and things. But the whole metaverse concept is beyond that. It's like the next level of interconnected data, and ultimately the old web browser needs to die and there needs to be something in its place. Unfortunately, you've got Meta Horizons trying to build this virtual world. Great, fine, but that's a VR-only concept and there's no data in there; it's just going to a virtual recreation of the world. Then you've got the likes of Niantic, who are trying to build AR billboards and information in space, so you have to go to the space to see these things. And the two factions are, like, warring off.
But we are seeing things emerging now, like digital museums. A lot of the big museums in America have sort of keyed onto this, there's a few others in other places, and I'm working on another, where you have a real-life, real-world experience that's augmented in some fashion - so applying the whole metaverse concept. I'm not just going to a museum to see this thing on a shelf with a little poster next to it. I can look at that, I can get information about it. I can actually have the creator of that thing walk up to me and have a conversation, telling me about this thing. If I've got friends there, we can then network and relate. But it also allows that I can take that with me - because it's on my phone now, I can go away. I paid to see this experience, I've got my phone, I can show other people. People who are not at the museum can still come and visit. They might not get the full in-person experience, because you still want people to come to the door; you don't just want the digital revenue. But you're adding to that as well, and you're making it almost a networked solution. So whether you're in person, whether you're remote, you're all still people going to and experiencing this content, able to take it away, able to share it with friends, and then friends can share it on in their own fashion.
I just need to stop the conversation every time with, "well, can we put it on the web?" I know - it's an app. You have to install it on the phone. You have to install Chrome - granted, it might come pre-installed - but you have to install Chrome on your phone. It's just an app: old app versus new app. But there are standards forums and things which are trying to make things more universal, so you can have a potential next-generation universal browser, and most of the current browser developers are around that. So Mozilla stopped work on extending Firefox to do VR, and they've now got the Firefox VR thing, which still only supports a brief standards mode, which will be separate, and people get confused about switching from one to the other. It just doesn't help so much when people keep shoehorning VR or XR into current browser tech, because you're making so many trade-offs to make it work. It sort of dilutes the experience, and it still frustrates me today.
We have a standard called WebXR, and the XR part of it is basically, "I can show a camera." There's no world understanding, no real context. You can build an app in Unity or Unreal and you can use ARKit on Apple and ARCore on Android, and it can see the world, it can map it out, and you can put content on it - you can have balls rolling off your table. I've seen 1,001 balls-rolling-off-the-table demos. You've seen all the lovely AI game generation things - "oh, place this here, place this there." Most of it's fake, but it's the same concept: look and drop things, and it falls off my table because of something else. There are draft specifications for doing these things in WebXR, but it's not there yet. And again, it still comes back to the fact that we're trying to shoehorn things into an old browser that wasn't built to do this, putting more and more weight onto it. It needs a clean break at some point, but change is hard. That one will probably ramble on forever.
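As an aside, for anyone curious what the WebXR side looks like in code, here's a minimal, hedged sketch of feature-detecting and requesting an immersive AR session with the hit-test feature (the draft spec that gives content some world understanding). This is browser-only API; outside a browser the function simply reports that WebXR isn't available.

```javascript
// Minimal WebXR sketch: feature-detect, then ask for an immersive AR
// session with "hit-test", the draft feature that provides real-world
// surface understanding so content can sit on a table instead of floating.
async function startArSession() {
  const xr = globalThis.navigator?.xr;
  if (!xr) {
    // Not a WebXR-capable browser (e.g. running under Node)
    return { started: false, reason: "WebXR not available here" };
  }
  if (!(await xr.isSessionSupported("immersive-ar"))) {
    return { started: false, reason: "immersive-ar not supported" };
  }
  const session = await xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
  return { started: true, session };
}
```

In a real page you'd then run a frame loop via `session.requestAnimationFrame` and query hit-test results each frame - exactly the plumbing that's meant when the draft specs are described as "not there yet".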
As you can see, it's a very woolly concept, and I've been on presentations and talks going over it, and between the people who do these things for work, there's a lot of dissension - especially those who like crypto, building artificial scarcity on something which is not scarce.
A Request To You All
If you’re enjoying this show, would you mind sharing it with a colleague? Check your podcatcher for a link to show notes, which has an embedded player within it and a transcription and all that stuff, and share that link with them. I’d really appreciate it if you could indeed share the show.
But if you’d like other ways to support it, you could:
- Leave a rating or review on your podcatcher of choice
- Head over to dotnetcore.show/review for ways to do that
- Consider buying the show a coffee
- The BuyMeACoffee link is available on each episode’s show notes page
- This is a one-off financial support option
- Become a patron
- This is a monthly subscription-based financial support option
- And a link to that is included on each episode’s show notes page as well
I would love it if you would share the show with a friend or colleague or leave a rating or review. The other options are completely up to you, and are not required at all to continue enjoying the show.
Anyway, let’s get back to it.
I can imagine it's a bit like that xkcd cartoon of, "there are fourteen competing standards; let's make a standard to unify them all. There are now fifteen competing standards."
Okay. So, yeah, I like the idea that there's hopefully going to be a clean break - that there is a separate web browser, or unifying app, for all of these mixed reality things. Because trying to shoehorn it all into one app is just going to mean your web browser slowly becomes a 1GB binary file with all of its dependencies, right? And that's no good.
We all know that. Who was it? It was Abraham Maslow who said, "when all you have is a hammer, everything's a nail," right? And if you're trying to shoehorn everything into the web browser, then the web browser becomes that hammer. And really, you don't want your app experience to be shoehorned in. Like you said, if you want your experience to be all-encompassing and fantastic and to be the one thing that drives people to your product, then you need to be using something that is different to what we're all using for what is essentially forms-over-data, right? Our standard web browsing experience is: load a form - it doesn't look like a form because it's been styled not to look like a form, but load a form - show me some data, I push the button, it sends the data back, then I push another button and it brings me some more data. Right? You don't want that. That's not going to be fun for, like you said, the people who are going to museums and stuff. "Oh, wow. I can scan the QR code and it takes me to a website that tells me the information that's in front of me." That's not engaging; that's not mixed reality or anything like that. So I fully appreciate where you're coming from.
Yeah, I mean, ultimately there is never going to be one metaverse, as much as that is the ideal, is there? We don't live in a digital economy where that's scalable, because every business wants their way to make money - and granted, yes, they should still be able to make money. Where the standards really need to come in is the case of, "I should be able to go from Horizon Worlds to Mozilla Hubs seamlessly, at the click of a button. I just transfer my singular digital identity. Who is me? There's only one of me. That's it." As far as I'm aware, my virtual evil clone has not invaded a single platform. This is where you see this word "decentralized" a lot - digital identity where no one company owns you or your data. So the whole decentralized part is: there's just me. Granted, there is bad decentralization. So Mastodon, which was seen as the Hail Mary to Twitter - when Twitter went sideways, everyone went to Mastodon - and then you find that, well, one server is able to disable your account and you can never sign up to any other server. It's like, that's not decentralized.
You have a decentralized part of your identity, and then you as an individual - whether it's your digital representation or you, it's still you. I mean, I could have many digital representations; there's still only one physical me. And that identity should be able to take what I own, where I own it, from place to place. And the one misnomer I will break down is that just because I buy a hat in Fortnite does not mean I have to be able to wear that hat in any other app. That's down to the developers who build those things. So yes, I own the hat, but it's only there if they build something which can then be shared, and another platform supports whatever that content is. Can I take it places? Yes. And taking it places doesn't change the fact that I own it. I own it in Fortnite, great; it does not mean it has to be in every other app. Saying it's not the metaverse if every other app can't have that hat as well - that's a little bit much.
Ultimately, my digital identity has to seamlessly travel between these different representations - whether I'm in an augmented fashion, open up a portal, put on a headset, and then I'm in Fortnite in that location. There are a load of location-based things out there now - still, RIP The Void. The Void had Secrets of the Empire; they had a big contract with Disney until everything went away. What they had was amazing: true virtual 4D presence. You put on these headsets in a physical space, and all the virtual content was lined up with the real world. So you walk up and push a wall: there's a digital wall in the headset, and a physical wall where you press buttons. They had heat sources where you were walking past a fire. They had mist where you were going through water. And if there were other players, you could reach out and actually touch them, because everything was tracked. It was amazing.
There are new ones that have spun up in its place, and those are equally fascinating. And they always have to try to keep up - all the headsets keep changing, the capabilities keep changing. My favourite big thing, which is still hard to get right, is hand tracking: simply being able to put a headset on and there are my hands. Yes, you don't have buttons, so that's a hard part for game developers, because we like buttons; we like to be able to press and do things. If I've got just my hands, it's a bit harder, but you can work around that - there are always ways to do that. But by having all these different technologies available, I should be able to go from experience to experience to experience. It doesn't matter whose app it is. Yes, I might need to download something; at the end of the day, you're always going to have to download content. If you're on a web browser, you're still downloading content - maybe from their server, but you're still downloading it. Someone's bandwidth is being used, but you get more performance when it's on your device. At the end of the day, if it's on a web browser on a big server, you're still beholden to that web server having enough power to run all the things you're going to do. And if there's a big peak, it's going to slow down; it can't slow down on your device, unless you're also running many different background apps.
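Hand tracking is a neat example of the design problem here: the headset gives you joint positions but no buttons, so gestures have to be inferred. A toy sketch - the 2 cm threshold is purely illustrative, not taken from any particular SDK:

```javascript
// Infer a "pinch" gesture from tracked hand joints: fire when the thumb
// tip and index finger tip come within a small distance of each other.
// Positions are in meters; the threshold is a made-up example value.
function isPinching(thumbTip, indexTip, thresholdMeters = 0.02) {
  const distance = Math.hypot(
    thumbTip.x - indexTip.x,
    thumbTip.y - indexTip.y,
    thumbTip.z - indexTip.z
  );
  return distance < thresholdMeters;
}
```

Real systems typically debounce this and add hysteresis (a looser threshold to release than to trigger) so the gesture doesn't flicker - which is exactly the "you can work around it" part.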
So the content always needs to get to your device. So I should be able to say, "right, well, I'm going to go into Horizon for a bit now, and then I'm going to put my phone down, I'm going to seamlessly walk over to this experience which has links to Fortnite and things like that, and I can see all these things. Then I can stand back - I've gone into a mall, and there's this lovely augmented booth where I can have a talk with someone." Which always leads to the world of AI and how all those things are working.
It's fun to think how all these new AI modalities will impact the virtual world. We're yet to see, sadly. I think we're seeing people going, "oh, we were all about VR; now we're going to step back. No, we're all about AI now." It's not a case of one technology replacing another, as if it were a transition because that's where the favour is going. All of these things are just more technology for the stack. All of those then build up to the ultimate solution: how you handle someone in a physical space, how you then provide the digital representation, either augmented or included in that content, how you then bring in all the information around that, and then network with other people around you. But ultimately then saying, right, "well, I've done this here."
It's like an arcade: I've played on one arcade machine, I want to play on another arcade machine, but I still want to be me, not Rambo.
So we've talked a little bit there about the theory of how it all sort of fits together. What about some of the technologies, then? Because I remember I first started my communications with you relating to Unity, and I know that Unity has a lot of support for headsets and tracking and all that kind of stuff. And I know that Unity does stuff like - they have C# that you can use to script things. But is that the only way you can do it? If I'm a .NET dev, right, and I want to do something really cool, and I've been told by the boss, "metaverse, mixed reality, make it work, put it in a browser!" right? I then tell the boss, "I can't put it in a browser; spend $1,000 to get a headset so that I can actually build it." What am I building it with?
So if you're doing purely C#, Unity is a good one, because it's got a big inline editor and you can do all these fantastic things with it. You also have visual scripting in there, but as a C# coder I still find visual scripting alien to me - connecting nodes together to get logic. Although I know the kids do all the Scratch and building-block things now; that's how they learned. I know my youngest is in high school and he's doing all these different things.
Then you've got the likes of Unreal, which is a bit more of an unwieldy beast, where you can either work in what they call Blueprints - which is their visual scripting solution, and is quite powerful, actually - or you can go back to C++. Although I have done C++ in the past, I've put it in that shoe box in the cupboard marked, "do not open ever again, if I can help it."
Then you've got the likes of Godot, which has its own scripting language called GDScript, but also has C# support - although, as they've moved into version four, the C# support is growing but still behind their own scripting language. And you've got the likes of GameMaker, which has its own little scripting system. If you want to go pure C#, you've got the likes of MonoGame, built off the foundations of Microsoft's XNA, which was a fantastic way in - almost how I got into game development. And then, just as it was reaching popularity, it got stopped, but it lived on. And that's all multiplatform.
And it comes down to this: what you use will largely depend on who you're targeting and what you can do. So it's a mixture of what skills you have, what you're willing to learn, and what you're targeting. Unity is by far the easiest when it comes to mobile, because of its lower footprint. Unreal's mobile support is less - you can do things, but the issue is it requires a higher-spec device to actually run. And again, it depends on who your target customers are and what device they're going to have. Godot has only just added VR support, and they're trying to build out AR support; it'll be interesting.
But forgetting all those, you can just go it alone. There are web-based approaches for building VR and AR content - WebVR and WebXR, as we discussed earlier. And those do work, and they are fantastic when they work. Obviously, they require constant connectivity to function, and if you're going to an exhibit hall, or you're going to South Africa, connectivity is a bit limited. It's amazing how those two things are very similar: you go to a web conference and there's too much WiFi, so you can't get anything, and you go to South Africa and there is no WiFi, so you can't get anything - and there's no in-between.
So again, it comes down to this: you also have to think about where you're going to deploy, where this thing is going to run. I used to say the London Underground, because you'd be stuffed down there with no connection, but it's all WiFi down there now, I'm told - don't know how good it is. WiFi is getting everywhere; Elon Musk is deploying Starlink everywhere, getting more things everywhere. People would then say, "we need more web." No - what you build, and the way you build it, will largely depend on all those different factors: who your customer is, what the platform is, what you're willing to learn, and what tools you need to use to build those things.
But from a learning perspective, they're all pretty much the same. The hardest part - and generally, when I'm either teaching or doing a conference, this is the hardest part in this digital age - is the fact that humans, at a base level, don't understand digital representation properly. We have a lot of conventions for how we build physical things. When it comes to a digital thing, either in the real world or in the virtual world, there's - ironically - a lot more to it, because you need an emotional context for whatever it is you're building. You need to understand its digital interactions with the physical world, or with the person, and how they're going to interact with it. Because it's not something like, "there's my mouse; I can pick it up, I can put it down." Great.
Here's this virtual thing. Well, I take this thing and it's going to envelop my entire arm and make a gun. And then that gun starts talking to me - if anyone's played High on Life and other such things. And everyone's at the same stage of understanding how digital representation works as it pertains to us as humans, because it's not physical. We're used to physical; we're not used to digital. And it's a whole learning process of how you do these things. So whether it's a case of throwing a grappling hook to a virtual ladder and having it pull me over to it - how are you going to do that in a way that A) is not going to make the person throw up, B) is comfortable, and C) works well with the environment that you're in? Do I have 2 meters of space to walk around in, or am I sat on a chair? All these different things weave together in how you build these experiences, and that's always the biggest challenge. But the challenge still comes back to: who's your target audience, what are they going to be using it on, how are they going to use it - whether it's a headset, whether it's a mobile device, whether it's some awesome augmented canvas where you're actually projecting digital stuff in 3D. I'm seeing a lot of these now in the likes of Times Square - I keep going back to Times Square; people keep showcasing things there - and they have a digital display of, like, a spaceman who's actually reaching out past the display. It's just a faux way of doing 3D, but it looks that way. And it was interesting that some of the first people who saw it when they ran it were like, "what's going on here?" They didn't understand.
Again, this comes down to how you as a human connect to these digital things in the world. And as designers, as developers, we need to understand that, to best relate to the actual end user so that they're going to have a good experience. Or, as my favourite example: the first thing I showed my kids in VR was a zombie horror game from a good friend of mine, who was building it for the Windows Mixed Reality headsets. And the range of reactions, from screaming out loud onwards, was hilarious.
Yeah, related to some of the stuff you've said: I know my friend James has a VR headset - I'm not sure which one, but he has one of them - and he plays Arizona Sunshine, which is a zombie shooty survival game. And he says, "at the end of the game, you've got to reach up and grab onto the bottom of this helicopter that's going to lift you out and save you." And he says even now, even though he's put hundreds of hours into this game, every single time he grabs hold of it, his body tenses, because he's about to be lifted off the floor. But it doesn't happen; he just sort of stands there with his hand in the air. Right? I suppose that's one of the less immersion-breaking sorts of things that could happen. Like you were saying, right: if you want to fire a grappling hook and then get pinged across the room, how do we do that in a safe way? And like you said, how do we do that so that the person doesn't feel motion sickness and things like that? So yeah, I think you're right.
I think a lot of it has to do with how we design these experiences. Yes, we can do it this way, but do you have a massive space to devote to one person being able to walk around? And how expensive is that? If it's not expensive and you have that space, fantastic - you could do anything.
Actually, one of the interesting things is that there was some research done - it's actually a few years old now, but I think it's evolved a bit more since then - where they intermixed eye tracking, which is becoming more and more popular; quite a few headsets have it these days. Fair warning: when you're using a headset with eye tracking, it will take time to learn, because your eyes will have to start flexing in a way that you didn't expect. It's like a muscle, like anything else - a muscle you start using for the first time is going to hurt. So a lot of people get put off; it's a slow, iterative thing.
So eye tracking is great, but what they did was use eye tracking to monitor how the person is walking. And what's interesting to know is that your eye is never just looking forward; your eye is actually looking everywhere, all at once, all over the place. And they abused that, because in the moments when your eye flicks from over here to over there, they shift the world slightly, which causes your body to physically turn - and you're not aware of it. As far as you're concerned, you're walking straight. And what they built is a system where you could keep walking straight, as far as you were concerned, in a two-meter-by-two-meter space, for miles. You were unaware of what was going on; you were simply walking and walking, and you could just keep going. But if you watch the camera of the person in the room, they go around like this.
And they even trialed this with multiple people, so the system actually helped two people avoid bumping into each other in the same space. I don't know how many they got up to - it'd be interesting. I can imagine it took a bit of wear on the eyes, but it's something you could learn to use, and I suspect it's something we'll see more of in the future as the research goes on. But the point is, for things like motion sickness, there are things we can do digitally to fool the brain into thinking something else, and that's where a lot of the advancements are coming in. So the ways we counteract things like motion sickness are: you either reduce a person's field of view, to reduce the amount of input their brain has to handle, or - one of the fantastic ones - in the case of snow, you give the eyes something else to look at while you're moving, so the brain is distracted. Your brain is not going, "oh, well, my body's not moving;" it's going, "look at the pretty snow - what's going on there?" And again, you're not particularly aware of it, but that's what your brain's doing: going, "oh, look at that. That's interesting."
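The redirected-walking trick described above reduces to a surprisingly small model: nudge the walker's effective heading by a tiny amount each step, and their "straight" physical path curves into a circle that stays inside the tracked space. The numbers below are illustrative only; real systems hide the rotation inside eye saccades and blinks, as in the research mentioned here.

```javascript
// Toy model of redirected walking: the user believes they are walking
// straight, but each step we rotate their physical heading by a small
// gain, so the real-world path curves back into a small tracked area.
function redirectStep(state, stepMeters, gainRadiansPerStep) {
  const heading = state.heading + gainRadiansPerStep; // imperceptible nudge
  return {
    heading,
    x: state.x + stepMeters * Math.cos(heading),
    y: state.y + stepMeters * Math.sin(heading),
  };
}

// Walk "straight" for 10 virtual meters in 5 cm steps...
let walker = { heading: 0, x: 0, y: 0 };
for (let i = 0; i < 200; i++) {
  walker = redirectStep(walker, 0.05, 0.06);
}
// ...yet the physical path is a circle of roughly 0.8 m radius, so the
// walker never strays more than about 1.7 m from where they started.
```

A real implementation varies the gain with gaze and blink events, and steers away from walls and other people rather than around a fixed circle.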
Another thing you can do - and it was amazing when I first experienced this one; it's one of the best ways, and I don't know of anyone who gets motion sick from it - is that I can run in VR as long as my arms are moving. If your movement is controlled by your arms, your brain goes, "this is perfectly fine." I forget the name of it, but there's a great climbing game where you have to climb cubes to get things - and yeah, you have to run and then start climbing, and your brain's focused on your hands: "what are my hands doing? Here we go. There." It's completely forgotten the fact that you've not thrown up; you've actually just run full pelt across this room, even though you were stood still. Because that's always the first thing that trips the brain up: as soon as you see movement that your brain knows it isn't doing, it goes, "urrrgh."
Some people cope better than others, whatever the experience is. But it's knowing those tips and techniques of how you do these things - and not every single trick will work for every single person. That's why, especially in VR titles now, you see a range of locomotion options: whether it's teleport - which is my preference, although you can't run with it, so I have to give up running to be able to teleport - or they have vignettes, or they gloss over something, turn it black and white - something to basically tell the brain, "nothing to see here. It's fine. We're just going to go and do this. Back in the room."
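The vignette option mentioned here is simple to sketch: scale the vignette strength (and so, effectively, the field of view) with artificial locomotion speed, so peripheral vision - which is what the brain leans on to detect self-motion - gets less conflicting input. The thresholds below are made up for illustration, not taken from any particular engine.

```javascript
// Comfort vignette sketch: returns 0..maxStrength, where 0 means the
// full field of view is visible and higher values mean the edges of the
// view darken and narrow. comfortSpeed is the (illustrative) speed in
// m/s at which the effect reaches full strength.
function vignetteStrength(speed, comfortSpeed = 1.0, maxStrength = 0.7) {
  if (speed <= 0.1) return 0; // effectively stationary: no vignette
  const t = Math.min(speed / comfortSpeed, 1); // normalize to 0..1
  return t * maxStrength;
}
```

Teleporting sidesteps all of this, which is part of why it stays the default comfort option: there's no continuous motion for the brain to object to.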
Right, I see.
Okay. So then what I’ll do is I’ll have a chat with you outside of the recording to get a bunch of links for interested people to check out. We’ve got a bunch in the document that we filled in together, like Red6 AR , which allows pilots to learn to fly all of these Top Gun style planes, but without leaving the safety of a safe environment, which is pretty cool.
Red6 is actually in the aircraft. So one pilot goes up in his aircraft, and that's all he needs, because everything else is simulated. He's actually got the actual plane - he's got all the G-force and everything else. The headset has been tuned to work at these supersonic speeds - getting the tracking working when the headset is moving that fast - and it simulates battle environments, so he can do some dogfights without having to send up another billion-dollar plane, which might go wrong and crash and cost other things. So, yeah, in Red6 the pilot is actually flying, which is just amazing.
That's mad. Can you imagine being the first pilot to be told, "yeah, we're going to put this headset on you so you don't see the real world. Good luck. Also, you're going at Mach 2."
Yeah. It is an augmented system, so they can see the real world, but the headset can replace it because it's that bright; it can actually overlay everything. "That's no moon, that's the space station."
Okay, so, yeah, I'll get some links from you outside of this recording. What's the best way for people to reach out to you? Like, they've listened to this and gone, "Simon, that's amazing. I'd like to learn a little bit more. Can you point me in the right direction?" How can folks reach out to you? Because we connected over LinkedIn. But is it LinkedIn? Is it Twitter? Is it - you said Mastodon earlier on, which seems like a bit of a dumpster fire to me, but there you go.
I did not. I let people go first - I sat on the fence and said, "I'll wait for everyone to move and then I'll go." No. I am basically Simon Darkside J pretty much everywhere. I'm on GitHub, on LinkedIn, on Twitter. I am on Facebook, but I don't use Facebook; it's there as a marketing tool - if I've got assets and things, books and things, they go on Facebook. There we go. But yeah, if you search "Simon Darkside J", you'll find me. If you actually search "Simon Jackson", you, interestingly, get a psychologist in America - that's not me. I think I'm third-ranked for Simon Jacksons on LinkedIn, but there we go.
And you said earlier on you’ve got a number of books out as well. So obviously people can hit up their favourite bookseller and find your books, right?
Yeah. There's Mastering Unity 2D Game Development, which I'm currently working on a third edition of, to modernize it for Unity 2023 and all the modern things. There's Unity3D UI Essentials, although I think that's off sale now - but that was great, because it was a reference title for building with the Unity UI system as it was.
And I'm also now working on a new book on, ironically, my next least favourite thing, which is automation. So, automating building your … basically, when it comes to Unity, you spend half your life waiting for Unity to load, or Unity to build, or do things. So why do it yourself? Give it to a computer, let it do all those things, and you can go make more coffee and actually get on with life. Automation handles all that. That one's with Apress; the others are with Packt, if you want to learn something nice about them.
Excellent. Well, what I'll say, Simon, is thank you ever so much for spending this time with me. I know you've got to rush off and go to work at some point in the next couple of hours, so I do apologize for using up so much of your time this morning.
Yeah, my rushing to work involves rushing to the coffee machine, rushing back to bed afterward.
There you go. And I assume the coffee will help you rush back.
Excellent. Well, like I said, thank you ever so much for being on the show. I really appreciate it. I learned a whole bunch of stuff, and I'm sure that everyone listening is going to learn a whole bunch of stuff too. So thanks for that.
Yes, always happy to help.
That was my interview with Simon Jackson. Be sure to check out the show notes for a bunch of links to some of the stuff that we covered, and a full transcription of the interview. The show notes, as always, can be found at dotnetcore.show, and there will be a link directly to them in your podcatcher.
And don't forget to spread the word: leave a rating or review on your podcatcher of choice - head over to dotnetcore.show/review for ways to do that - reach out via our contact page, and come back next time for more .NET goodness.
I will see you again real soon. See you later folks.
- The origins of Simon’s nickname
- Microsoft Inclusive Design Principles
- Empathy, Sympathy and Compassion
- Project Fizzyo
- Snow Crash
- “Standards” by XKCD
- Scratch programming language
- Microsoft’s XNA
- High on Life
- Arizona Sunshine
- Red6 AR
- Ways to contact Simon
- Simon’s books (at the date of recording)