The metaverse is more than just a space to socialize; it’s a representation of real life. Understanding this helps us better understand augmented and virtual realities and where they are heading. Today’s guest, Dmitrijs Jurins, captures this essence, focusing on Digital Twin implementation for connected devices and Metaverse data representation. He is a modern business leader, visionary systems thinker, and technical management advisor who is passionate about full-cycle IoT and Edge hardware product creation as well as SaaS and Enterprise system design and development. In this episode, Dmitrijs talks about how his company uses digital twin technology to provide true operational insight into built environments. He explains how digital twins work, how they relate to the metaverse, and what applications his team is building to transform companies. Plus, Dmitrijs shares the exciting things happening in the space, what they mean for economies of scale in hardware, and what holds us back from technological advances. The metaverse goes beyond the stigma attached to it; it offers us a tool to better our lives. Join this conversation and learn how it can help companies move forward into the digital future.


The Future Of Augmented Reality And Virtual Reality With Dmitrijs Jurins

In this episode, we have Dmitrijs Jurins. He is a technologist based out of the UK, specifically in Manchester. He has a long history of working for different companies. Right now, specifically, he’s working in the metaverse and augmented reality. Dmitrijs, welcome. Thank you for joining us. If you could tell us a little bit about yourself before we delve into the metaverse and augmented reality, which is something I want to talk to you about. Tell us what you’re doing right now and your background.

It’s nice to meet you, Doctor Awesome. Thanks for having me. It’s a true pleasure. I’m looking forward to this session. I’m working at a smart buildings company at the moment, with mission-critical customers, ensuring they get the maximum out of their built environment. My mission there is to work with the building owners and operators to make sure that their buildings are built correctly, first of all.

I’m engaged with the design stage of the building because they’re building owners and operators. I should probably elaborate on this term. With these companies, everything starts from an idea like, “We need another building.” Either the company is growing and needs another office space, or it’s a data center company that needs another cluster of data centers to be built. It’s this caliber of company, and they’re interested in making sure they build it right. As it’s built by third parties, there are tenders and preferred contractors, but as they build it, they collect all the information about what the building is comprised of and how to run it later.

They follow a six-stage framework, which I’ll not dive into, but at the end of the day, when the construction is over, a digital handover happens. It’s a specific term from the construction industry. Many years ago, a truck full of cardboard boxes would arrive at the building owner’s office. Someone would unload them and say, “That’s all you need to know about your building that we completed for you.”

There are all the operations manuals, all the insurance, all the procurement documents, and everything. These days, thanks to the company I work for, we ensure that delivering that information digitally is sophisticated, well-crafted, and reviewed three times in the process. That’s where we start working with the building owner to make sure they operate the building well, because the digital handover is all well and good, but it’s up to date only until the moment it’s handed over. It then becomes outdated.

 


 

We are true believers and advocates, and I myself am a big visionary of digital twin technology. From the design phase of the construction, we create a digital twin, or replica, of the building. We work with design agencies and architects to make sure that they design the building with digital twinning in mind. We put the customer on the pathway to a twin from the get-go, so any building management systems, energy distribution systems, or indoor air quality sensors are installed and provisioned at the design phase.

The general contractor works from the bill of materials and, at the end, you have a connected building that sends data to the cloud. That’s where the digital twin comes in extremely handy. We treat the digital twin not only as a gimmick for marketing, or as something for a facilities management team just to see the building in 3D.

It’s a bit more like a database, because we build a process of converting the architectural drawing into a graph database where we start building relationships between spaces and assets. For example, one asset could be a door, and you could build a graph relationship between space 1, space 2, and the door that’s in between them. What I’m alluding to is that we can start doing thermodynamics.

We can do fluid dynamics calculations so we can see what the airflow would be. A practical application would be for one of our mission-critical customers, hospital infectious wards, for example. You need to keep the pressure in the patient ward lower than in the corridor so that when the door opens, the air goes into the infected area rather than out of it.
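To make that space-and-asset graph concrete, here is a minimal sketch in Python of how two spaces joined by a door might be modeled and checked against the pressure rule he describes. The names and in-memory structures are invented for illustration; a production twin would use an actual graph database.

```python
# Minimal sketch, assuming hypothetical names; a real digital twin would
# store these nodes and edges in a proper graph database.

# Nodes: spaces, each carrying live telemetry.
spaces = {
    "ward_12": {"pressure_pa": -5.0},    # infectious patient ward
    "corridor_3": {"pressure_pa": 0.0},  # adjacent corridor
}

# Edges: a door relates the two spaces it connects.
doors = [{"id": "door_D7", "between": ("ward_12", "corridor_3")}]

def check_isolation(door):
    """Check the ward stays at lower pressure than the corridor,
    so air flows INTO the infected area when the door opens."""
    ward, corridor = door["between"]
    dp = spaces[ward]["pressure_pa"] - spaces[corridor]["pressure_pa"]
    status = "ALERT" if dp >= 0 else "OK"
    print(f"{status} {door['id']}: dP = {dp:+.1f} Pa")

for d in doors:
    check_isolation(d)  # OK door_D7: dP = -5.0 Pa
```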

Monitoring these things has been possible, and there have been systems doing that for a good number of years, but when you want to get true operational insight into your built environment, you do need to start collating data from various systems. You start searching for correlations, you start assessing, and you can start doing modeling. Having just the data is not enough. We realized that, overly simplifying, a digital twin is a database, really a very complicated relational graph database, but it’s still about relationships. That helps us put the data we collect in context.

A digital twin has the database. It is really a relational, very complicated graph database, but it’s still about relationships that really help us put the data we collect in context.

When I think about what you do, I think about all of the near-future science fiction where someone presses a button and a digital schematic of the building comes up and they’re like, “This is the path that we need to go to.” If it’s like a heist movie, they’re like, “We got to get past this vault and this vault has this password on it.” It’s something that we intuitively think about that this is available, but it’s not.

There’s only a handful of companies that are doing something like that. You mentioned the hospital. I work in a hospital and it’s completely disjointed. You have somebody who takes care of the air conditioning, somebody who takes care of the plumbing, somebody who does this, and somebody who does that. There’s no overall sense of, “We have a temperature of 98 degrees in this ward. What happened? Is the air conditioning off?”

There’s no main control center like you would see in those movies, and if there is a main control center, it’s something from the 1960s. It’s not something that has been updated. It’s cool that you guys are doing that. I think it lends itself very well to us talking about augmented reality and virtual reality, because if you can create such a model that is an exact representation, and the sensors that are in the building are represented in the digital twin, then it’s like being there. Pressing a button and having the digital model come up is something that’s available to us now.

That’s exactly our line of thinking. We followed exactly this path and realized, “Hold on a second. We have a full digital replica of the built environment with all the data, both static, and by static I mean the actual model of the building and all the documents attached to it, and also live data from all the sensors and so on.” Pressing a button is a good anchoring point here, because we realized, why do you even need to press a button? Why don’t we gear up the boots on the ground and the staff doing the operations?

It could be anything. It could be facilities management. It could be mission-critical staff, engineers, doctors, or anyone, with that information in the augmented reality space. We performed a hardware assessment and realized that the devices we have access to at the moment give us good spatial awareness. They are good at scanning the premises and understanding where they are. It was the last piece of the puzzle that we needed.

We identify where you are. We’ll put your avatar straight in the twin. We know which room you’re in. We know your role because you’re logged into the device, and based on your role, we know the job you are on. Let’s say you are an electrician performing diagnostics on the energy system located in Plant Room B65. That automatically enables us to do proper wayfinding.
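As a rough illustration of that wayfinding step, assuming the twin already exposes a graph of rooms connected by doors (room names below are hypothetical), a shortest route can be found with a plain breadth-first search:

```python
from collections import deque

# Hypothetical adjacency list derived from the twin's space/door graph.
adjacency = {
    "lobby": ["corridor_1"],
    "corridor_1": ["lobby", "corridor_2", "plant_room_B65"],
    "corridor_2": ["corridor_1", "office_wing"],
    "plant_room_B65": ["corridor_1"],
    "office_wing": ["corridor_2"],
}

def find_route(start: str, goal: str):
    """Breadth-first search over rooms: returns the shortest route."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# The headset knows its current room and the job's target room.
print(find_route("lobby", "plant_room_B65"))
# ['lobby', 'corridor_1', 'plant_room_B65']
```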

Let’s say you are a contractor who came on-site and is not familiar with all the ins and outs: wayfinding, access control, and access to all the documentation. Even if, with all these tools, you still struggle, we can set up a remote session with an engineer from, let’s say, Schneider, the supplier of the component you are servicing, or Siemens, to advise you from the comfort of their house without traveling to the airport, flying to your country, traveling to the site, and bringing all the tools and other spare items that took some carbon to produce.

It’s a phenomenal amount of waste and CO2 reduction, and general cost optimization and fault reduction. Now, there are a lot of caveats, and pretty much everything I named requires a specific integration. Take access control: we do need to integrate with that access control system. For calling the Schneider guy, we do need to set up certain arrangements. But all being well and all things put together, that’s the future, or how we see the future. That’s exactly what we’re aiming to deliver with our solution in the metaverse. That’s the definition of a metaverse in our minds.

When people think metaverse, they think about the Facebook product and that this is a virtual space for us to get together and socialize, which, honestly, I don’t think is necessarily accurate. What I think the metaverse is, is a 3D representation of real life, and that’s something I feel is missing in our day-to-day life. Even right now, we’re having a two-dimensional conversation with each other, and so much of what we do is in this two-dimensional space. Human beings are three-dimensional creatures. This is something that we inherently understand. It’s why Google Maps has a three-dimensional function. It’s something that we have available to us in technology. I just don’t feel like it’s been leveraged enough. Why do you think that is?

 


 

That’s a very good question. I do have real-life examples. I’m a very technical person by nature. My partner is an actress, and she’s by far the least technical person that I talk to. A very interesting observation I found by talking to her circle of acquaintances is that they struggle with reading maps. Pure old-school maps make it very hard for them to work out where they are.

From the map, we go to the schematics. The floor plan of a building tells them absolutely nothing, but once you start showing the digital twin, they start seeing the visual relationships. They see the walls and the windows. They start projecting that into the brain’s memory. In this case, I have the privilege of being technical enough to have gone through the hard bit of understanding all these schematic and planning things.

I get it. I have a suspicion I was trained to do this. Maybe I trained myself or I got it as part of my general training but to answer your question, different minds work differently and some minds go completely different paths during their life cycle. That’s why we find that our immersive metaverse products and augmented reality products click so well with a wider variety of people and digital twins as well.

Prior to digital twins, we worked with our customers, and some of the people, despite the company being technical, still had issues looking at the floor plans, for example, to find that Plant Room A45B, because floor plans don’t necessarily correlate to reality. Once we start showing them digital twins or letting them experience things in the metaverse, we see how it clicks. It’s immediate feedback.

I noticed that too. The thing about human beings is that if we cater to what we have evolved to understand, then it clicks, and three-dimensional images and three-dimensional understanding and awareness are something that we’ve evolved to understand. For whatever reason, I feel like it hasn’t gotten the widespread adoption that I would expect, because this type of technology has been around for a long time.

I remember the Nintendo Virtual Boy back when I was very young. This was an amazing product, but it never sold well. Even now, the Oculus is something that I feel is widely available. It has the capabilities to do all the things that you’re talking about, but for whatever reason, it hasn’t penetrated society the way that it should. You’re in the technology space. What do you think about that? Why is that?

Up until the latest Meta Oculus device, the main barrier was the price point. The devices were available, but they were not so affordable. The latest release by Meta is decent in terms of functionality and delivers everything. It’s the best device at the moment, and it’s half the price it used to be. It’s still $1,500 or so, still at the higher end of things, but it’s now more affordable. It’s in the range of a premium iPhone, for instance. More people will start adopting it. That was the first barrier.

We’re getting closer to breaching that barrier, and it’s best breached when the economy of scale kicks in. To kick in the economy of scale, it’s a bit of a chicken-and-egg problem. Companies like Apple, Meta, and Microsoft need to create an ecosystem for content creators. These people need an ecosystem where the development tools and deployment platforms are available. It’s the marketplace we’re talking about.

The times when you could sell software or a game of your own creation for a 64-bit system on a tape or on a CD are all gone. All we need is a marketplace, like Steam for games or Netflix for TV series. Companies like Meta, Microsoft, Amazon, and specifically Apple are exceptionally good at that. I do expect a lot of breakthroughs from the Apple event when they release their augmented reality glasses, because they will not just release the glasses. They never do. Look at all their product launches. They always release the toolkit for the software developers and the marketplace, whether it’s iTunes or the App Store. They give you the route to market.

Compare us right now to us two years after such a release. Right now, for me to get an augmented reality developer, let’s say a Unity developer or Unreal Engine developer, I would either need to poach them from the game dev industry, where salaries for a decent developer are six figures, which in the UK is the top end of the market in software development, or I would need to poach them from a very niche industry.

For example, my lead Mixed Reality developer is an ex-Airbus developer of cockpit simulators for pilots. Fast forward 2 or 3 years after the release of an ecosystem by Apple, and every single university grad with a MacBook would be able to create a side project, deliver it to the market, earn some money, get experience, and be available on the market.

Contract rates for developers will go down. More companies will be able to create their products and deliver them. Also, the hardware will be more and more widely available, which means more production capacity. The economy of scale kicks in, and that’s the rolling ball that snowballs. I’m telling my Mixed Reality development team, “We are like the Nokia R&D team in pre-smartphone times. We are doing cool stuff that very few companies are prepared to pay for, but when another Steve Jobs introduces that new device that truly revolutionizes everything, that’s when it’ll truly kick in, and we will come ready, with our experience.” That’s how I see this.

That’s an interesting point, because I never thought about human capital being the bottleneck. Now, I understand a little bit more about what is holding us back, because when I look at technological advances, I always think it’s the science, the technology itself, or even the price point that’s the logjam. However, you’re saying it’s the supply of developers. That’s the rate-limiting step when it comes to mass adoption of this technology.

Also, those on the receiving end of the technology, the businesses, will quickly catch up. I’ll give you an example that is very specific to the UK but very applicable to the entire world. The UK is obsessed with health, safety, and training. In any organization you join, you undergo training, inductions, health and safety procedures, etc. It’s all paper-based or digital, it doesn’t matter. It’s not augmented.

With lots of our mission-critical customers, our lowest-hanging fruit in augmented reality is training. We deliver training programs to our customers, starting from equipment familiarization and going down to procedural training. They train on mission-critical equipment without damaging anything. They have the absolute luxury and freedom of breaking things during the training. They’re even encouraged to.

One of the best augmented reality applications I’ve seen in construction was working-at-heights training. Usually, you take someone, strap them in, and explain all the caveats and difficulties of the dangerous environment, so you need an expert present and there’s a risk to start with. This company has good virtual reality simulation software that does this instead, and the incident rate went down immediately as soon as they adopted the technology.

I met them at an award ceremony where they, rightly so, got an award instead of us, because theirs is such a superior, good application. Back to my point: even the janitors are to be trained in augmented reality, but for that to really happen, all those companies first need to know that the metaverse is not that bad stigma thing we discussed, the Facebook gimmick they are aware of. It’s an actual practical tool that is affordable, and they can get it from a number of vendors.

All those companies need to know that the metaverse is not the bad stigma thing, the Facebook gimmick; it’s an actual, practical tool that is affordable. 

Moreover, it’s like with software right now. If Bob, the janitor, needs some software, they buy him a cheap Android phone. I see the metaverse being like this in years to come: “Bob, the janitor, needs training. Let’s get him that cheap set of augmented reality glasses. There is the software that we have on a subscription or enterprise plan. Let’s get him trained. He will get trained faster and quicker. We don’t need to pay a lot for people to hold this knowledge and train the trainers.”

Also, you can put him in dangerous situations like the ones you’re talking about. That app sounds amazing. I don’t know if you’ve ever had the experience of putting on a virtual reality headset and having the image of a high place put in front of you, but they have this experience at a lot of these virtual reality arcades where you’re walking on a beam and looking down from a skyscraper. It truly feels like you’re going to fall. There is a very powerful experience in the kind of app you’re talking about. What are the apps that are getting you excited about the virtual space? That specific app was a profound experience for me, and it got me excited.

I’m very excited about the app that we are building, personally. Before I deep dive into that, because that’s the one I can talk indefinitely about, I’m excited about the prospects of a few apps. One is a cooking assistant. You are wearing augmented reality glasses, so you see reality. You are cooking and you want that recipe and a timer, and probably some temperature information, which you generally do not work with in real life but would probably be nice to have.

However, if I could get my timers on the things that I’m cooking and my recipe available, that would already be a great enabler for the popularization of augmented reality. It even sounds awesome. You can add some integrations to that. For example, I worked with one company from London, UK. They are focused on fridge cameras, and they’re working now to make sure this is part of their new product range.

That’s a camera with image recognition that integrates with the grocery stores for product identification. They scan all the time. Every time you open the door, the light turns on. They see what you put in. They try to work out the expiry times. They are working on advising you on the recipes you’d better cook now, not only from the nutrition point of view, although they have a nutritionist working for them, but also from the waste reduction point of view. You get that information. You have the recipe crafted for you. You then have this recipe floating or stuck on the wall in the augmented space where you’re cooking.

It’s such a straightforward thing. They’re teaching robots how to cook right now. Robotic appendages have the ability to make food. For humans, having a timer and a recipe overlay while we cook would be a very fun, interesting, and easy app to make. That’s a low-hanging fruit for the consumer technology space, which would be interesting, but tell me about what you’re working on. I would love to hear about that.

We are from the smart buildings environment. Our customers are already quite smart and thoughtful about how they design and build their buildings. There’s a lot of information available from the start. We are maximizing the information repository that we built for them: the static information and the live information from all the connected systems.

I’ll give you an example from one of our data center customers. We are working with their critical environment operations and emergency response teams. These are what I call boots on the ground, who respond to any emergencies happening with the critical infrastructure. It could be a server showing bad behavior or something as bad as an oil spill from one of the oil tanks, for example.

I keep calling it multiplayer, as I love games. It’s a true multiplayer experience where you have people in the control center in front of their screens. There are some subject matter experts, probably offsite, and probably even off the island or off the continent. There is the response team, and they all need to exchange information. For example, a guy in the control center is no longer seeing the fuel level from generator Room A, so he sends someone out with the lens so everyone can see what this person sees.

It’s being captured for future reporting as well, and he can engage the screen capture when needed. He can run there and read the analog gauge to see what the fuel level is and assess the situation. He needs to start the generator in Room B. He starts it, but that room doesn’t give him the information from the energy distribution system, because what you experience when you start a generator is that your amperage goes up on that circuit and you need to shut down another circuit. He would basically need to run between the rooms.

Here, we can provide that live telemetry straight into the generator room, and we also track the position of the person inside the data center and all the colocation spaces. We know that you are in this generator room and that it feeds that electrical circuit, because we know how the building was built. “Here you go. That’s your electrical distribution telemetry data, right here. Start your generator.”
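A toy version of that location-aware lookup, with made-up identifiers: because the twin records which electrical circuit each generator room feeds, knowing the technician’s room is enough to surface the right telemetry.

```python
# Hypothetical twin metadata: which circuit each generator room feeds,
# captured from the as-built model at design time.
room_to_circuit = {
    "generator_room_A": "circuit_17",
    "generator_room_B": "circuit_22",
}

# Live telemetry keyed by circuit; in reality this would stream from the
# energy distribution system's API.
telemetry = {
    "circuit_17": {"amps": 0.0, "status": "offline"},
    "circuit_22": {"amps": 312.4, "status": "ramping"},
}

def telemetry_for_occupant(room: str):
    """Return the distribution telemetry relevant to where the person stands."""
    circuit = room_to_circuit.get(room)
    return circuit, telemetry.get(circuit)

# Indoor positioning says the engineer is in generator Room B.
print(telemetry_for_occupant("generator_room_B"))
# ('circuit_22', {'amps': 312.4, 'status': 'ramping'})
```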

If you don’t know how, or if something is wrong, here is the operations manual for you. If it still doesn’t work for you, “Shall we call in the experts?” We call in the expert company, and they see what he or she sees. They point and draw. It’s a very futuristic experience. Someone in front of their Teams screen draws an arrow pointing to a part of the generator, and for the person wearing the lens, a 3D arrow appears pointing to that part.

That’s what people think about when they think about near-future technology. In James Bond, you have to get to this part of the hostage situation. Q is drawing the wayfinding and he’s giving you all of the information for the vault opening up and stuff. I feel like that’s something that I look forward to. I’m very excited about what you’re making but I had a couple of questions. What kind of sensor technology is available for buildings now?

I feel like temperature would be one system, cameras another, and all of the different areas would be routed differently. How do you keep all of that information? I imagine it would be different software running the temperature system and the camera system. You’re taking all that information. How are you doing that?

You’re right in everything you said. Let me give you two completely different examples, and those examples pretty much make up our real life and the buildings around us. You can have companies that build a building for themselves to run. Their goals are very long-term, and they want to max out the operations of this building and reduce its running costs.

They invest in the design stage, and they work with companies like Johnson Controls, for example, to make sure that their systems are fit for purpose long-term and interoperable with the systems in their other buildings, so they don’t need to train people in different buildings differently. They think smart about this. In terms of what they fit in the building, very frequently it is driven by their present operational needs.

They know that their building needs an access control system and a CCTV system. For example, if it’s a premium office space, they say, “We will go for indoor air quality monitoring to get a higher certification with one of the world’s leading certification authorities, like WELL or LEED,” and those specify quite well what you need.

You hear that this is a LEED-certified building and this is a carbon-zero building. Those would have more sensor capabilities for you to put into your little model, correct?

Exactly. Frequently, these will be individual systems, sophisticated and good for the purpose, but not necessarily interconnected into some monolithic whole. Typically, they’re not. They’re isolated, siloed systems, and you would have some control room in the building where you’d have your building management system, your fire suppression system, and your air quality system, and that’s fine. That’s a good-case scenario where we then come in, digitize everything, and unify it into the digital twin.

How we do that is we work with vendors. Our customers are building owners and operators, so they have very strong buying power with those vendors. They introduce us to them and say, “Look, we own the data.” Fortunately, these days, laws like GDPR in Europe and data protection laws in the US support us. They clearly say that whoever bought this system owns the data.

That’s all great that you have your cloud services, but if the owner says, “I want this data to be sent to that company for a digital twin,” they are within their rights to do so. Very rarely do we see any rejection. These vendors work long-term. They need to service those buildings, and the last thing they want is to break relationships with the customer. They assist. Most companies have APIs, which are a means of machine-to-machine communication between systems, and we integrate with a range of APIs.

We also perform some gap analysis. This is a real case from a customer of ours in Atlanta, Georgia, in the US. They weren’t sure whether they needed to move to a larger office or build a new one because, after the pandemic, people were giving them mixed signals about whether the office was too crowded. It depends on whom you ask. We installed desk occupancy monitors: a small infrared sensor installed under the desk, with a battery sufficient for three years. It talks over radio with a gateway device, and the binary logic there is, “I see a warm thing.”

Are you guys creating these sensors, or is this something that already existed?

We performed the market assessment. I’m coming from an IoT background so I’m quite familiar with the majority of it.

For those of you who don’t know, that means the Internet of Things. It is how these different sensors or different machine parts communicate with each other.

A typical sensor is nothing more than a small printed circuit board with a few components and a few sensing elements, whether an audio sensor or an air sensor, you name it. It has some communication device installed, in our case a radio antenna, and a battery or power store. That is typically the sensor, and that’s typically what IoT is. Anything less sophisticated than a mobile phone gets tagged IoT. It’s a device.
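For a feel of how little such a device actually sends, here is a sketch in Python of decoding a made-up occupancy frame: a few bytes carrying a sensor ID, a battery level, and the one-bit “I see a warm thing” flag. This is an invented frame layout for illustration, not any specific vendor’s protocol.

```python
import struct

def decode_frame(frame: bytes) -> dict:
    """Decode a hypothetical 4-byte occupancy frame:
    2 bytes sensor ID, 1 byte battery %, 1 byte flags (bit 0 = occupied)."""
    sensor_id, battery, flags = struct.unpack(">HBB", frame)
    return {
        "sensor_id": sensor_id,
        "battery_pct": battery,
        "occupied": bool(flags & 0x01),  # the "I see a warm thing" bit
    }

# Example frame as a gateway might receive it over the radio.
print(decode_frame(bytes([0x01, 0x2A, 0x5F, 0x01])))
# {'sensor_id': 298, 'battery_pct': 95, 'occupied': True}
```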

You’re making those yourself or are you getting a vendor to make the sensors?

At the moment, we partner with a variety of vendors. We have our preferred suppliers, because every single vendor is different. Although there are common interfaces for how you exchange the data, every single implementation is a bit specific.

The reason why I’m asking is that let’s say you work with a vendor. The occupancy rate of a building is something that you set up this system to measure the occupancy rate. That sends a certain amount of information to you. That’s one sensor system. Let’s say the temperature sensor system is another software and that sends information to you. You are the person that translates that information.

You put it into that digital model. How difficult is that? Not only how difficult is that, but is there a further specialty in acquiring that sensor information? This is another hypothetical. Let’s say I’m in a hospital. I want to make sure that I understand the ventilation system and that might be a certain vendor. If I am in the medical space, do I need to be a specialist to learn that or is a company like yours able to easily learn that because, “It’s this type of information? We can integrate it like this.”

To answer that, I’ll make a reference to data mining and data analytics. If you talk to a data scientist, they would say that in their job, 80% of the time, or at least a good portion of it, is spent cleaning the data, making sure the data is something they can work with. The rest is the actual analytics. We realized this when designing our digital twin system, so we perform this data homogenization, or sanitization, at the point of data entry.

We receive the data, and let’s be honest, it comes in all shapes and forms. We do have the technical know-how to receive this data, whichever format or transmission technology is used, but we don’t simply dump this data in our data warehouse or database and say, “That’s the data.” We start making sense of the data at entry and acquisition. That’s where we start making references to our architectural model.

We say, “We see it’s sensor A, B, C, or D.” We check the model. “A, B, C, and D are in Room 505. Let’s link them together.” It’s called data parsing. We parse the data and then start attributing it: “The latest value is the CO2, and this is the rotations per minute of our HVAC system.” We clean and cleanse the data when we receive it. That means our data analytics works with clean data from the start, in a relational data model that captures the relationships.
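As a minimal sketch of that homogenization-at-entry idea (invented names, with a plain dictionary standing in for the architectural model), each raw reading is parsed, typed, and linked to its room before anything is stored:

```python
# Sensor -> space relationships, as the twin's architectural model records them.
sensor_registry = {
    "A": {"room": "505", "kind": "co2_ppm"},
    "B": {"room": "505", "kind": "hvac_rpm"},
}

def ingest(raw: dict) -> dict:
    """Parse one raw reading and attribute it to a room and measurement kind,
    rather than dumping it unprocessed into the warehouse."""
    meta = sensor_registry.get(raw["sensor"])
    if meta is None:
        raise ValueError(f"unknown sensor {raw['sensor']!r}")  # quarantine, don't dump
    return {
        "room": meta["room"],
        "kind": meta["kind"],
        "value": float(raw["value"]),  # normalize the type at the door
        "ts": raw["ts"],
    }

print(ingest({"sensor": "A", "value": "612", "ts": "2023-05-01T10:00:00Z"}))
# {'room': '505', 'kind': 'co2_ppm', 'value': 612.0, 'ts': '2023-05-01T10:00:00Z'}
```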

We do understand our customers and our end users. At the very beginning, I mentioned that the built environment, our buildings industry, is behind technologically. Hence, the users are not that technologically advanced in working with the data or drawing the correlations themselves. We understand that well. Typically, with any customer I work with, I set up a focus group across departments to understand their operational needs and their long-term and short-term goals, to make sure that the reporting, dashboarding, and alerting make sense.

That’s critical because they are at the point where they have not yet adopted this technology. No one has. It’s new to everyone. It needs to be easy. Here we follow the Apple guidebook, where the technology needs to be easy to use. I’d rather limit functionality but make it streamlined and easy to use than confuse people with a lot of things they need to correlate until they eventually give up. What I’m telling my product design team is that we are there to make their jobs easier, not harder. That’s it. This is why we are there, and that sets the scene for our design and user experience decisions.

If I’m understanding correctly, you get all of this data and you homogenize it. Whether it’s architectural data about how the building looks, or the temperatures, or the CCTV, you homogenize all of that and make it into a package that is easy to understand and based in three dimensions, because that’s the inherent benefit of augmented reality or virtual reality. Which one of those is harder? Is it harder to homogenize the data or to create that model?

The hardest part is creating a model that is useful to the end user and the receiver. This is the hardest bit, because homogenizing the data is software development. We have professionals there; we know what we’re doing. We can do that. As for creating the models and displaying the data in augmented reality, we have passionate individuals there, because most of my augmented reality developers are in the industry because of their passion for gaming. They are a creative bunch.

Creating something that looks good, is slick, and is very interesting is a given. It’ll be like that. But then putting it in front of a non-technical person, let’s say not a gamer, someone in their first experience of using augmented reality, that’s the trick, and we’re not always successful in getting it right the first time. There are also the limitations of augmented reality.

Some people feel dizzy. Some people forget how to use their hands when they are in the augmented reality space. Some people pick it up quickly, but the user experience is the hardest part of this job. You put the glasses on, and it should be easy for you to use them. That, by the way, brings us back to the ecosystems that companies like Meta or Apple are creating.

If you recall when the Apple iPhone was released, when we started using it for the very first time and through the first years, recall how consistent the experience was. Your swipes always did the same thing regardless of which app you were in. You knew how to close an app, and it’s still like that. It didn’t deviate. It’s a very consistent experience, still, because Apple has been strict with their design guidelines from day one of releasing the technology.

This is what I’m looking for at the moment. My team is creating design guidelines with this user at their heart. We set up a lot of workshops with people. We dive into the human metrics: how do they do things? It’s hard, but we would like to make great software, so we have to study all that. When someone like Apple releases their ecosystem, all this will be considered, and I can pretty much guarantee it. They did it with earphones and with watches. They will do exactly that with the augmented space.

I want a person who puts the glasses on for the first time in their late 60s, for example, to know how to pinch, how to point, and what a swipe does. Apple will force me to do this. It’s my fault if my users give me bad feedback, but at least I’ll have the rule book. At the moment, I’m building the rule book myself, and it is a bit hard. So, answering your question: the hardest bit is making sure that the user experience is correct.

The hardest bit is making sure that the user experience is correct.

When is the Apple augmented reality or virtual reality set coming out?

It’s been pushed back a few times already. At the moment, they are banking on July. There is a big Apple developers’ conference in June or July 2023. We’re looking forward to that. If not, there will be a new product launch in October or November. It’s cooking. It’s brewing. There are leaked prototypes, schematics, and patents. It’s a real thing. There are plenty of people working in the space, of course under strict NDA. Something’s coming. I am looking forward to it.

What’s the price point for that? Do you know? Are you familiar?

It will be expensive, $2,000-plus. It’ll still be a premium product. The iPhone was and still is quite premium. I have trust in these guys. They’ve done it before. Let’s put it this way: if someone can pull this off and create an ecosystem, it’s going to be them. Microsoft already did that in terms of the HoloLens and the entire backend infrastructure, while Apple is great with the software development bits and pieces and the marketplace.

Frequently, you will need cloud-based servers, content delivery networks, frameworks, and software development libraries. That’s Microsoft’s field. They are already well-established. They made themselves device-agnostic, which means they will work with anyone who comes out with a product, whether it’s Apple, Samsung, or Google.

Most likely, those companies will use Microsoft’s technology, directly or indirectly. Microsoft already set the scene, which was a very good precursor for everything. We are waiting for Apple. You never know. Maybe Google will announce something, as they did with Google Glass ages ago, which was a failed product. It came a bit too early and didn’t have that ecosystem.

Of the peripherals that are out, I feel like we can talk about some technology that exists to establish a spatial relationship and turn that into software. When I think of that, I think of the Oculus. Honestly, I would put Google Glass in there even though it didn’t have widespread adoption. Microsoft had an augmented reality camera. What was that? It was a peripheral for the Xbox. I forget the name.

It plugged into the Xbox One. You didn’t have to buy it separately; it was automatically included. Regardless, I remember having loads of fun with my family with Just Dance and all these spatial relationship video games. Which one of those is your favorite? What do you enjoy using? What are you using in your product? Are you using these things, or are you using a 2D representation, like a tablet, that becomes 3D?

We are using the Microsoft HoloLens 2, which is quite an old device technologically, but it has a number of very strong points. Up until lately, you either went VR, for virtual reality, or you went augmented reality, where you project something onto glass and your retina picks it up. That has changed with the recent Meta Oculus device, because they introduced something that they call a passthrough mode.

The passthrough mode works with the increased resolution of the screens, because you do have screens right in front of your eyes. Those screens became good enough to give you a full augmented reality view. The cameras on the outside stream the video signal onto those screens, and you see it as if you do not have a visor in front of you. That experience is truly mind-blowing. It feels like there is no plastic in front of you.

It’s very hard to say whether it’s the passthrough or actual reality. It’ll be called mixed reality from now on, because you will be able to switch between full virtual reality, where no reality is coming into your eyes, and augmented reality by means of this passthrough. We’re using the HoloLens 2 because it’s currently the only and best available device for augmented reality that has sufficient processing power on the actual wearable.

There are really good augmented reality products by Lenovo and HP, but because they struggle to power them for long enough, they offload the processing. Processing and transmission of data are the most power-hungry things on any device. The simpler the device, the less power it consumes. If you turn off the GPS, Wi-Fi, and Bluetooth on your phone, your battery will last longer, because modems consume a lot of power.

The simpler the device, the less power it consumes.

They have made simple glasses, but all the processing and communication happens on your phone. You have a wire running into your pocket, which is not very convenient. That’s why we’re working with the HoloLens 2 device. There won’t be a HoloLens 3 in the foreseeable future, but we designed our software to be device-agnostic. It can take any device. It will work on any device. HoloLens 2 works well for us.

I’m going to a company called VisAR. They’re pioneers in the augmented and mixed reality field in the UK, a Microsoft Gold partner, and real smart cookies. I’m going to try all the newest available devices to see which ones are the best, just to make sure that if we need to switch or offer customers a different product, we select the right hardware. There is plenty of hardware available, and it’s used in niche applications. There is quite a lot in education, and a lot of universities are experimenting with augmented reality.

Some mission-critical companies are also very interested. Take oil and gas exploration: it helps them a lot, because being hands-free in a hazardous environment is quite an interesting concept. There is plenty available at the moment. Every single device has pros and cons, and people have different takes on how the technology should go forward. It needs someone big to come in, set the rules, and say, “No. This is how we’re going to do it.”

Do you have an Oculus at home?

I don’t have an Oculus at home yet. I am yet to get one. That’s on my list.

I haven’t made the leap either myself. I was so close a few months ago because I wanted to do this show but do it completely in the metaverse. I didn’t know if there was a market for that and I feel like there’s been a huge drop-off in the metaverse adoption. Do you feel like that or is that only my naive thought about it?

It did happen. There is a drop-off in interest in it, and there are a lot of rumors around, but if you look deeper, a natural thing has happened. What happened? There were significant layoffs in technology companies at the end of 2022 and the beginning of 2023 as a result of the post-pandemic adaptation to the market. They had overhired, etc.

One thing happened: Microsoft, which was the front runner of the technology with their HoloLens 2, has pretty much completely sacked their virtual reality development department. They were developing toolkits like the MRTK toolkit, which pretty much everyone used. It’s now open source. They haven’t confirmed that there won’t be a HoloLens 3, but they haven’t denied that it’s canceled either.

A lot of people took it the wrong way, but we work with Microsoft. We are their partner and supplier. It’s not exclusive knowledge, but the general description I received from the teams at Microsoft is that they don’t want to repeat what happened with Windows Phone, so they’re no longer willing to work on hardware. They don’t want to write the software for the hardware. They will provide the infrastructure.

It was such a shame because, honestly, I loved the Windows Phone. I was the biggest fan. I thought it was so forward-thinking. It was a phone that wasn’t tied to being just a phone; with the live tiles, what a great idea. I’m sorry. I’d love to talk to you for a lot longer, but we are getting to the end of our time together. At the end of each show, I ask each of my guests three questions that I’ve been thinking about.

You can tell that I’m trying to listen to you, but in the background, my mind is going crazy. I have a ton of questions to ask you. For the first one: earlier we hinted at near-future science fiction. When you think of science fiction that’s applicable to your field, what are you thinking of? What gets you motivated in the morning?

There are two things. One is more realistic; one is less realistic and quite scary. The first is that I do want us to be more efficient and free from dull activities, and that’s where I believe technology helps us. We see the huge effect that ChatGPT or Midjourney is having on us. I use these tools personally to offload all the repetitive, boring tasks that I don’t like, and no one does.

That’s how I see us in the future. Technology, whether it’s neural networks or augmented reality, makes us focus on what we are good at, best at, or want to excel at. Different people will excel at different things, but the main direction will be you want to focus on something great. The rest will be taken care of by technology for you at no huge cost and at an affordable price.

In the future, technology, whether it’s generative neural networks or augmented reality, makes us focus on what we are good at, best at, and want to excel at.

That’s what drives me forward, and I want to think I am a part of this change, a part of this transformation. That’s how I see the future. The second thing comes out of the first, and it’s also very motivating but also very scary. Us becoming more efficient leads to efficiency not just in your own wellbeing, like, “I’m more efficient at home. After work, I don’t need to spend much time doing nonsense tasks.”

However, it also makes us more efficient at work. In fact, it’ll make us more efficient at work first, because that’s where the technology mostly gets adopted. With us being more efficient at work, there are two ways forward, but which way will we go? One way is that we will be able to do the work we need to do in less time. We’re talking about a 4-day or a 3-day week, the work-as-much-as-needed-to-deliver type of thing.

The rest is for your creative mind, for your leisure, etc. That’s where I would like us to be. We can be so efficient as a society, as humanity, that we don’t need to work that much or that hard anymore. The second path leads to staff being made redundant, less ability to join at junior positions, more homelessness, and more joblessness, because you don’t need that many people at work anymore.

Take me, for example. I work on average 40 hours per week. In Europe, we work 40 hours per week. Say I become ten times more efficient. I will do ten times more work in my 40 hours, and we will hire ten times fewer people, or in fact sack people. That’s the other path we could go down, and that is the scary one.

I prefer to think of myself as an optimist. Human beings are going to rise to the occasion, and it’s going to make us more effective in what we’re doing. To get to what I was alluding to with the question: when I think of the science fiction that inspires me, Star Trek is one of the ones I look at as the most utopian and optimistic vision of the future. We have artificial intelligence. We have all of these things that make our life easier, but it allows us to live more full and rich lives.

That’s something that you pointed to in the beginning. I do think that we’re on the cusp of it, especially with our phones in our pockets. It’s only a matter of time before we get to the point where we’re using them as a guide to maximize our time per day in the happiest and best way possible. What’s some science fiction that inspires you? It’s because I feel like that’s something that we had talked about.

I would say, Isaac Asimov and Robert Heinlein.

Asimov is the benchmark for an optimistic view. Everybody’s so worried about artificial intelligence, but the three laws I thought were so well done that it’s like, “This can be a benefit for society. Robots can help us.”

It gives me this sense of optimism. There is a grimmer picture drawn by my favorite authors, The Moon Is a Harsh Mistress, for instance, where things didn’t go that well. In general, I am more of an optimist as well. I truly want to stay an optimist. To be honest with you, it checks out. We are still going in the right direction. There are hiccups. There are bumps on this road. Generally, we’re doing better as we go. If you look at the wider landscape over decades or centuries, we’re living in the happiest time. If you watch the news, we’re living in absolutely bad times, but generally speaking, we’re living in a good time.

Let’s go to my second question. I feel like I’ve always wanted to have virtually the ability to explore space, which I feel is something that is a low-hanging fruit. We have all of this data about the solar system and our position in it and everything like that but I haven’t found a good app for that. Is there a good app that you know of, or if not, how do we make that happen, Dmitrijs? I want to make that happen.

Me, too, and not just space. It’s any experience. It could be good experiences or bad ones, like near-death experiences to tickle your brain. In the game Cyberpunk, there was a whole narrative about recording someone’s memories, playing them back, and selling bad memories or good kinky memories on the black market.

The answer I’m alluding to lies somewhere there. At the moment, our inability to experience a lot of things comes down to our geographical presence. I’m here in Manchester; for me to experience a desert, for example, I need to board a plane, go to Africa, to Morocco or Tunisia, and drive somewhere, and then I’ll experience it. It’s too expensive and takes too much time. I’d like to experience it from my home. Space is the same, just with a slightly higher price tag.

 


 

My alternative is to use a medium. At the moment, we are limited by the medium, whether it’s my phone, my computer screen, or virtual reality, which again is a medium, a better one, probably the best medium that we have for experience propagation. But I would go for the neural chip in the brain. You need to go straight to the brain and take it in with the brain. I think that will be the answer.

We will be able to send a drone to another planet, record all the sensors, and then play them back straight to your brain. That’s what we do with any medium: we play recordings back to our eyes and our ears. We can’t do it for the nose, as we don’t have smell generators available to us, but we tickle the senses, and the brain does the heavy lifting.

From what you know, there’s no space exploration app on Oculus yet?

Not really. There are no good ones. I’ve seen a few apps from the education sector, but they are good for education, not for experiences: you can see how the system is built, where the planets are, and how they roughly compare in size, but you don’t experience them. I think we think alike there. I would like to drop onto the couch, put the VR glasses on, and feel that I’m on the deck of my USS Enterprise exploring the universe. We’re not there yet. Eventually, we will be.

The last question is where do you see augmented reality and virtual reality or even your own industry in a few years? Where do you see us being? Let’s say Apple comes out with an amazing VR product.

Personally, I advocate for this, and I lead my teams with the following narrative. In a few years’ time, mixed reality will be a commodity. Everyone will have a device at their disposal, whether in their pocket, given to them by an employer, or at home, but it’ll be another medium that people can use. There will be talented people coming up with good ideas on how to use this commodity easily, for cooking, for fitness, for driving, you name it. There will be someone who comes up with a great idea. There will be a whole distribution chain for this software. The hardware will be affordable.

In a few years’ time, mixed reality will be a commodity.

I didn’t mention it before, but batteries are a very limiting factor at the moment for any wearables. The Apple Watch is a good example. It’s fantastic technology, but you need to charge it daily, and that’s a bummer for a lot of people. It’s a big obstacle to the wider adoption of smartwatches. The same applies to augmented reality.

Once all these things get better, it’ll just be part of your life, and in a few years’ time you probably won’t even recall not having it, or you will think of not having it as the Stone Age, the way I think about life before the mobile phone. That’s what I’m telling everyone, and that’s where I would like us to end up. I’m actively contributing to that as much as I can from here in Manchester in the UK.

Good luck to you. It was so fun talking to you and I look forward to this future that you are talking about. I wish you all the best of luck in creating it and from The Futurist Society, we are signing off and we’ll see you next time in the future. Have a great one, guys.

Thanks a lot.

 


About Dmitrijs Jurins

Dmitrijs Jurins is a modern business leader, visionary systems thinker, and technical management advisor. He is passionate about full-cycle IoT and Edge hardware product creation as well as SaaS and Enterprise system design and development. He is currently focused on Digital Twin implementation for connected devices and Metaverse data representation.


By: The Futurist Society