
Cybersecurity and the Metaverse with Trend Micro’s Bill Malik

We had the chance to speak with Trend Micro’s Global VP of Infrastructure Strategies, Bill Malik, on the topic of cybersecurity and the metaverse. 

Watch the 3-part interview with Bill below (or read the transcript) for his insights and perspectives on what we may expect as the metaverse becomes part of our daily lives.

Bill Malik – VP Infrastructure Strategies at Trend Micro

Bill has 40+ years of experience in IT spanning from research director and manager at Gartner to OS developer, tester, and planner with IBM. He also had his own consulting practice where he provided solutions for information security, identity management, disaster recovery, and large enterprise solution architecture services for major clients like Silver Lake Partners, Motorola, and AIG.

His wealth of experience in IT security enables him to help clients across all avenues of information security: from the technology (endpoints, networks, servers, cloud, and Internet of Things), to policy and procedures, to security’s impact across acquisition and development through deployment, operations, maintenance, and replacement.

Bill is an MIT alumnus, a member of CT InfraGard and ISACA, has 160+ publications, and is the brilliant co-host of The Real Cybersecurity Podcast.

Transcript of Interview

Nathan: So today we’re sitting with Bill Malik, the Global Vice President of Infrastructure Strategies for Trend Micro, a multinational cybersecurity product company. Bill’s four-decade career in IT spans from OS developer, tester, and planner with IBM, to CTO of an identity management vendor acquired by Sun Microsystems, to his own consulting business, where he provided information security, disaster recovery, identity management, and enterprise solution architecture services for major clients like Motorola, AIG, and Silver Lake Partners. He was a research director and manager at Gartner, has over 160 publications, and is a member of ISACA. Bill, welcome.

Bill: Thank you. Happy to be here.

Nathan: Great. So today we’re going to be talking about cyber security and the metaverse. But before we jump into that briefly, how did you get started in cybersecurity?

Bill: Well, the IT field was always fascinating to me. I did some work on it during school, and then on graduation my first full-time professional job was as a programmer. My career has basically been driven by: where’s the biggest problem that we’ve got? And while I was a developer, the biggest problem was the stability of the systems.

And so I got into operating systems, and that eventually led me to work at IBM. And while I was at IBM, the biggest issue at that time seemed to be coordinating the build of the products. And so I got involved with the build and test group, just assembling the 3,000-plus parts that make up the OS. Well, after I left there and went to Gartner, the biggest problem seemed to be security. And so I came up with a pitch called enterprise-wide security, and I delivered that a lot. And eventually the leadership at Gartner said, you will be the manager of the security service. And I said, well, I can think of a couple of people better, but eventually I did.

And I’ve pretty much stayed close to that ever since. But it’s a fascinating area. I tell people I’ve been in IT a long time, and there has not been a week that’s gone by that I haven’t learned something new in the information security space. It’s an endless fountain of fascination and things to learn and do.

Nathan: Absolutely. And so, bringing it back to the metaverse, which might be one of the newest things that’s upcoming: if you think back, when it comes to information security, have the concerns and strategies changed much since you started, from documents then to the metaverse now?

Bill: Well, every year or two, we seem to come up with a new way to wreck the security structure that we’ve made almost good enough. Right? We got almost good enough at standalone mainframes, and then they started networking. We got almost good enough at networked mainframes, and then we introduced PCs. We got almost good enough at that. And so the story goes. The last few years have seen an explosion in social media platforms, which have brought a couple of problems.

One of which is the trustworthiness and reliability of the information. Another is the ease with which those platforms grant bad actors access to individuals and corporations. The metaverse just sort of amps that up a bit more. And so I believe there are legitimate reasons for concern about how things will go in the metaverse.

Nathan: So if we go a bit deeper in terms of the ways of securing information and how that’s changed, are there any specifics that you’d be able to explain to us? Ways of securing it, maybe the types of concerns, that sort of thing.

Bill: Well, the first problem is: how do you prove you are who you say you are when you’re talking to someone? Right now the typical answer is, well, it’s got to be something you know, something you have, or something you are. Or, if you’re of a certain age: something you forgot, something you lost, or something you used to look like. The fact is, establishing identity is really tough. There was a case just last week about a fellow who had managed to convince his peers that he was, in fact, an AI-generated avatar online.

And it was very tough to figure out if that were true or not. A colleague of mine at Trend came up with a series called Project 2030, which looked forward at a world where immersive holographic experiences allowed us to, well, in one case, have conversations with people who had passed on in real life.

Because we’d captured enough information about them that we could reconstruct how they might participate in a conversation. That trope was used in a science fiction movie a couple of years ago as well. So, yes, how do you secure that stuff? Well, look at it from a criminal’s point of view, right? You have somebody, a buyer in commercial terms, who’s got money and some kind of a need for a product or service.

You’ve got a seller who has intellectual property and a product or service that might meet the buyer’s need. And then you have an electronic platform connecting the two. Now, if you’re a bad actor, you’re going to say, well, I want to get this guy’s money. I want to figure out what it is they want, and maybe I can slip him a fast one and sell him something that’s almost good enough, but not really.

You look at the seller and say, well, maybe I can steal their IP, or maybe I can taint their product. And then you look at the channel and say, well, maybe I can intercept the traffic and change a message. So, distrust. Well, what’s the motivation for that? I think the typical framework the three-letter agencies use is the acronym MICE: it’s money, it’s ideology, it’s coercion, or it’s ego.

So for some reason or another, a person is going to go in there and try to damage stuff. Now, it used to be that the crimes we’d hear about were nuisances and annoyances. Somebody would hack a server to put their logo on it: hooray, I conquered this mountain. But then it became more focused on money.

Once upon a time, ransomware was just spray and pray. They’d deliver it everywhere, and if they got grandma’s photos of the kids, they’d encrypt them and she’d have to shell out a hundred to get access to them again. Well, that kind of high-volume, low-margin stuff isn’t very fun. So the bad actors got smarter and started going after targeted events.

And so you had business email compromise, the contemporary versions of ransomware, and the newest twist is, of course, political actors, propaganda, that kind of stuff. Now, here’s where this comes to the metaverse. The more you know about somebody, the easier you can sell ’em something, or you can mislead them. In order to operate in the metaverse,

I have to have not only a very rich, real, lifelike experience, but the technology that’s providing me that experience has to be constantly aware of what I’m doing, what my reactions are, and so on. Which means that as I am playing, you know, Gris, I will be constantly monitored in that environment. Here’s the problem: we know from studies going back 50 years or more that people actually perceive stuff before it rises to the level of consciousness.

You can take a one-frame image of something scary and tuck it into a horror film, and that one-thirtieth-of-a-second image will cause people to feel unease, making the horror film even scarier, even though they don’t know exactly what it is. Well, so how does that play out in a metaverse environment? I’m playing the game Gris, I’m teaching this woman how to generate music so she can get her colors back.

And every once in a while, an image flashes up, say an image of a sheep. When I see the image of the sheep, again not consciously but below the level of perception, my micro-expression goes, ah, that’s cute. And then the advertiser or the political action committee or whomever is running this thing says, ah, he has an attraction to sheep. And then they put up an image of a cow and I react, again not consciously, but they now know that I’m attracted to sheep and I dislike cattle.

So what does that mean? It means when I’m looking at a video of a debate between two political candidates, while I’m looking at candidate A, if somebody could stream an image of a sheep in for just the briefest second, I’ll go, ah, she seems like a nice person. And while candidate B is speaking, they sneak in an image of a cow. It’s like, ah, he’s creepy. Right? So I am potentially being manipulated by stuff that I can’t even see.

Nathan: Yeah, that’s incredible. That kind of bad actor runs much deeper than I would’ve expected. What the metaverse could bring in terms of manipulation and influence, I wasn’t even thinking in that direction.

That’s very eye-opening. If you imagine the metaverse, and the end vision of what a metaverse would look like when it’s fully realized and everybody can use it, what do you think that might look like? And you’ve touched on a lot of these possible threats and issues that can arise, but what other types of threats do you think would be possible in that fully realized metaverse?

Bill: Well, the nice thing about this conversation is we get to talk about what the future might look like. And for me, when I think about the metaverse: the reason Facebook had to use that name was because in 1992, Paramount trademarked the name “holodeck,” right? Because that’s what the full-function metaverse would be.

I mean, it’s not only visual and acoustic, but there are aromas and feelings and textures. You can be sipping a drink on a porch on Risa, right? Well, if you look at the uses of the holodeck in the Star Trek genre, what things can go wrong? Oh no, bad guys have taken off the safety protocols, right?

Or, Captain, we can’t recover control, a powerful intelligence is guiding this. Or it has taken over the thing and they’re now using it as a way to enslave holographic semi-beings. Well, that’s the dystopian world. The neat thing about Star Trek is that the intrepid crew always pulls through at the end, but here we are about to enable something like the holodeck.

And we don’t even know what the safety protocols are. And the people that are putting it up there are not doing it for the betterment of society, or even the entertainment of the participants. We are there as potential consumers, and the people that are putting it in place have already shown their ability to make billions of dollars by gauging what our desires are, what our fears are, what our secret shame and guilt is, in order to drive us to buy more soap or to vote for this candidate rather than that one.

So putting it together, the end game: it would be nice if it were something like a Star Trek universe, but we have large commercial interests. Again, if you’re not the one paying for it, then you’re the product, right? And changing that is going to require substantial rethinking of what it is we’re using the internet for.

Nathan: Right.

If we think about the technology as well, to get to something like a holodeck, we’re going to have to potentially go through phases of something more akin to a Blade Runner dystopia, or a Ready Player One bodysuit type of scenario. There’s already augmented reality machinery in industrial Internet of Things technology that’s being used.

Sure. If we talk about it closer to our present, and the steps before we get to something fully detached like a holodeck, what kind of real examples do you think we could anticipate? What would the threats to them be, and how could they be breached?

Bill: Well, I did a presentation about a year and a half ago on digital transformation, and the use of augmented reality played heavily into that vision. So one scenario is the farmer who puts on his AR glasses and takes a look at his field, and the glasses tell him: these crops in this area seem to have an infestation of such-and-such, you need to apply this pesticide. This area seems to be getting a little less rainfall than necessary.

You might need to increase your irrigation parameters here. These crops are coming in early, so you probably want to check your equipment for harvest. Now, the thing that digital transformation does, and what the metaverse will hopefully allow us to do, is to have that information about what the health of the crops is.

Well, hopefully you’d have a connection with a firm that knows how to make pesticides safely, and they’re the ones giving you the analytical data. Hopefully the information about rainfall comes from, again, a trusted source, maybe a national meteorological group. And the stuff about harvest and machinery might come from a vendor whose trucks and harvesting gear you use.

The thing here is it all comes out on one platform, because of agreement on APIs, and the farmer decides what it is he’s going to see. Now, how can a bad actor mess with that? You can put the wrong pesticide on the crop and it goes down the drain, stuff like that, or wasted motion. Bad, bad action.

Similar scenarios apply to, say, the maintenance of heavy equipment. The technician comes in, puts on the VR glasses, takes a look at the pump, and up come the specs. Here’s a list of the most common faults, and over here is a box to click if you want a five-minute tutorial on how to change the gasket. And this person becomes an expert.

Well, they’ve done it before, but never with this particular machine of this vintage. So they become an expert right here and now at solving that. So there are tremendous potentials for improved productivity and greater job satisfaction: I now really know what I’m doing. Smart cities make use of this to make the lives of residents, visitors, shoppers, and workers easier; to be able to get emergency services on site quickly for whatever kind of problem; to route people around construction if we need to fix a main on this particular street; and so on.

So yeah, there’s significant upside potential, but every one of those opens a door for a bad actor. Trouble ahead. One of the jobs I had as an independent was doing an audit for a department of transportation for one of the northern states. And I found that there are annunciator boards, the signs on the side of the road, and at that time, that information was nothing more than a text message, and the security was that nobody knew the phone number.

It’s like, what? This is 2000-and-something, you guys have heard of war dialing, haven’t you? So anyway, they fixed that up. But you’re driving along and you see a sign that says, happy birthday Snookums, your honey bear. Or a sign that says, explosion and fire at Indian Point nuclear reactor, Tappan Zee closed.

Again, it’s societal disruption. It’s losing faith in the integrity of our infrastructure. It’s raising the level of adrenaline, and people who are worried don’t think deeply about things. So it’s a set of vulnerabilities. Now, in order to make something like the metaverse work, we have to have a lot of sensors, a lot of analytical engines, and a lot of actuators.

And all of those things are IoT devices, which used peculiar proprietary protocols, right? I mean, Modbus was invented the same year that ARPANET went live, in 1969. It was 30 years later that Modbus began using TCP/IP. I think it’s port 502, but check me on that. The point is that full convergence between OT and IT, I don’t think, will happen.

I’m never going to file my taxes on an industrial robot, and I’m not going to use my PC to drive my car, but there will be greater information flow going back and forth. My car will tell me about its state by giving me something on an app. So as we introduce more flavors of technology, especially from the industrial IoT world and the IoT world, into our IT networks, we dramatically increase the attack surface. It’s a poser, it’s a challenge, it’s a hard problem. It’s not a complex problem, it’s just a hard problem. We need a long list, and to check items off one at a time.
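As an editorial aside, Bill’s recollection checks out: Modbus/TCP does run over TCP port 502, and the frame format itself illustrates his point about peculiar legacy protocols landing on IT networks. Here is a minimal sketch (the unit ID and register addresses are invented for illustration) of a Modbus/TCP “Read Holding Registers” request. Note that no field anywhere in the frame authenticates the sender: any client that can reach port 502 can issue it.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (always 0), remaining length,
    unit id. There is no authentication or encryption field anywhere.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, addr, qty
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
# 12 bytes total: a 7-byte MBAP header plus a 5-byte PDU
```

The whole request is a dozen plaintext bytes, which is exactly why bolting such protocols onto routable networks widens the attack surface Bill describes.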

Nathan: Absolutely. And then the one that I always hear about, because it’s the closest to us right now, I think, is the bodysuit, right? If you’re wearing a tactile bodysuit that gives you that tactile sensation as you move around, you don’t want that to get hacked. And what kind of biometrics is that kind of suit going to collect? Who gets those biometrics, and how can those biometrics be exploited?

That’s very daunting. But we don’t want to scare people too much away from what the metaverse could bring.

Hey Everyone, join us for Part 2 of the Cybersecurity and the Metaverse series, where Bill elaborates on the inevitable dark verse, and how we might address it. See you there.


Nathan: Welcome to Part 2 of our series on Cybersecurity and the Metaverse. Last time, we talked about the possible threats as the metaverse grows from current AR and VR technology to a fully realized metaverse, where Bill envisions something like Star Trek’s holodeck. Check it out if you missed it. Now let’s continue our conversation.

I do want to get into some of the other aspects of illegal activity that might be facilitated by a metaverse, things that we might not have thought about, like money laundering initiatives and how the metaverse facilitates that, or what we call the dark verse and the activities there, like drug trafficking, illegal file sharing, and different scams. So could you talk a little bit more about that, and what other kinds of illegal activities could be facilitated?

Bill: Well, the dark verse will come along at pace with the metaverse. Remember that the laws are always five to ten years behind the reality. There was a person hacking the phone company in California in the nineties, and the FBI tracked him down and found out he was a 15-year-old kid in the Netherlands. Now, at that time, Holland did not have any laws about hacking. And so, with the approval of the state police in the Netherlands, the FBI went to the kid’s home and talked to his mom, who said, you really should not do that, Heinrich. And so that’s what we’re faced with.

There will be, I mean, yeah, it’s still going to be the same stuff. It’s still conversion, it’s still theft, but it’s using really weird media. Once upon a time, if you stole something, you could be guilty of wire fraud. But what happens if you take data and then just give it away? You get some information and you just publish it.

You don’t make a dime off of it, so technically it’s not theft, but it’s still disruptive. And the laws have to catch up. Same thing with the metaverse. The case six years ago about Jordan Bellamir being groped in a virtual environment, that’s chilling. I mean, she didn’t even have a bodysuit and she felt violated.

I don’t see any way that laws or regulations can proactively intercede to provide safety and trustworthiness to people entering. It’s going to be caveat entrant until we get enough of a body of knowledge that we know what matters, and unfortunately the laws will then have to follow. Now, there are things we can do, right?

Increase awareness among people, especially young people, about what the risks are. We can improve security and put baseline security on IoT things. Any device that interacts with a human being has to require a user ID, and a password would be nice, right? Something like the basic stuff. I don’t want to bring in full-blown security architectures; simple stuff, multifactor authentication even. Even that would help.
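To make Bill’s “simple stuff” concrete: the kind of multifactor authentication he is describing can be as lightweight as a time-based one-time password (TOTP, RFC 6238). The sketch below is an illustration in standard-library Python, not any vendor’s implementation:

```python
import hashlib, hmac, struct, time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, code, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret, now + i * step), code)
               for i in range(-window, window + 1))

# RFC 6238 test vector: this shared secret at T=59 yields "94287082" (8 digits)
totp(b"12345678901234567890", for_time=59, digits=8)
```

A device that ships with even this much, a per-user secret plus a verifier, clears the bar Bill sets well above the default-password IoT gear he is worried about.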

Nathan: You touched on law enforcement a bit and how there’s going to be a delay there, but there’s going to be a wealth of issues as well in terms of how that enforcement is going to happen, right? What are the issues around jurisdiction? Who controls what, who enforces what? What kind of challenges do you think law enforcement will have when they do try to intercept any of these illegal activities?

Bill: Well, the first one you mentioned is jurisdictional. That is, the people who would steal stuff would then hide it on a server used by a newspaper in Britain, which has extremely strong protections for freedom of the press, you know? And then they’d bounce it off a server in Japan, which has exceptional rules about search, and so on.

So yeah, the bad guys already know the landscape, because they’re using it now to traffic guns and drugs. Although in a research report that we did a few years ago, we found, oddly, although in retrospect it makes sense, that there’s almost no traffic in illicit weapons in Northern Africa or the Middle East, because they’re so available. You just walk down the street, and for a case of beer you get an AK-47. So we don’t see a lot of traffic there. It’s a cost-benefit thing. But yeah, that infrastructure will be used to facilitate crime, though I don’t think the bad guys are going to create a super fortress.

They have more than enough to keep doing what they’re doing. For instance, just to draw an analogy: ransomware is a horrible plague; business email compromise is orders of magnitude bigger. And I mean that literally. You don’t hear about business email compromise because if a company gives money to somebody and then finds out it was robbed,

it’s not a big newsworthy event. But if you can’t get gas in your car, that is. Well, the point is that the people who are pulling off those hacks aren’t super criminal geniuses, and they aren’t using zero-day exploits. Most of these hacks are based on problems that were found and fixed two years ago or more. So basic cyber hygiene, and regulations demanding adherence to a code of standards for interconnection among different types of devices.

That would go a long way toward it. The metaverse will be scary, and there will probably be something like an opium den in some version of the holodeck, but we’ve got a lot of work to do before we start worrying about people being addicted to virtual media. Oops, there’s my Facebook.

Nathan: Yeah. Well, I had a question here that you’ve touched on quite a bit now: what measures, if any, can be put in place to avoid the formation of this dark verse that we’re talking about? You’re saying it’s bound to happen in some form or another, but these very security measures, educating younger populations, basic cybersecurity hygiene, is that going to be enough? Or is there anything that we can really do to prevent or delay the formation of this dark verse?

Bill: Well, here’s where the jurisdictional issue really gets big. Governments have different philosophies about the concept of prior restraint. I had the pleasure of interviewing Floyd Abrams; he was the attorney who defended The New York Times in the Pentagon Papers case. The Supreme Court agreed that you cannot prohibit the publication of something because it might have secrets in it. Different societies set the bar at different heights. There are some societies where certain kinds of activities are categorically excluded, even though they have not happened. And that means that if a criminal in this jurisdiction manages to get into that jurisdiction, then there’s going to be no extradition, because in that jurisdiction what they did was not considered a crime.

Right. And that’s going to put additional stress on the cooperation between nations to try to flatten this. I mean, certainly at this moment in history, we have blocs that are, in a cyber sense, at each other’s throats. I would like to think back to the optimism of 1991: the wall was down and people were writing songs about waking up from history. The End of History was a great book. The point is that we could get there, but we have to figure out how to get there, pretty much all at the same time or in stages. I mean, you have the lock icon on your web browser. It doesn’t really tell the truth all the time, but at least it tells you that the site is using secure transmission.

Maybe we’ll have something like that for the metaverse world, in this domain: the following gesture will cause anybody within six feet of you to back off, something like that.

Nathan: Right, which is what I think they implemented immediately after that case with Jordan Bellamir, something like that. Yeah, in my naive mind I really just thought the avenues of security and protection would come at the server level, the servers that are hosting the metaverse, and then also the gear that is used to access the metaverse. Those are the two points that I immediately think of. Okay, well, aren’t those the key areas where we need to be implementing some type of security measures, to prevent servers from hosting a dark verse type environment, or hacks happening to the gear that people are using to access it? Does that make sense?

Bill: Absolutely. That’s the foundation. I would stretch it a little bit. I would say that we also have to pay attention to the network itself, because we can look at the traffic on the network and identify indications of compromise. We don’t know where it’s coming from, we don’t know where it’s going to, but this traffic here will mess up somebody in the metaverse, right?

That’s one thing. Another is I’d like to take servers and generalize that, not just to the physical servers in a particular place, but to cloud-based environments: have something there to look at the kinds of traffic that’s going on. Then, on the endpoint side, the headset: I’ve already got add-ins for my browser to do things like block ads and filter spam.

So I would like to have an add-in like that for my AR/VR goggles that intercepts and nulls out these fraction-of-a-second images, smooths them over. We can do that; that’s not a hard problem. So there are ways to intercept and block this stuff, but you’re right, the two control points are going to be the device that I’m using and the platform from which the information is coming and to which it’s going.
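Bill’s claim that this is “not a hard problem” is easy to illustrate. A single spliced-in frame differs sharply from both of its neighbours, whereas an ordinary scene cut differs only from the frame before it. The toy sketch below (frames are simplified to tuples of luminance values, and the threshold is an arbitrary assumption, not a tuned parameter) flags such one-frame inserts and replaces them with the preceding frame:

```python
def smooth_subliminal_frames(frames, threshold=50.0):
    """Replace any frame that differs sharply from BOTH neighbours.

    frames: list of equal-length tuples of pixel luminance values, a toy
    stand-in for decoded video frames. A flagged frame is replaced with
    the preceding frame, smoothing the splice over.
    """
    def diff(a, b):
        # Mean absolute per-pixel difference between two frames
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    out = list(frames)
    for i in range(1, len(frames) - 1):
        if (diff(frames[i], frames[i - 1]) > threshold
                and diff(frames[i], frames[i + 1]) > threshold):
            out[i] = out[i - 1]          # null out the one-frame insert
    return out
```

A real goggle add-in would work on decoded GPU frames under tight latency budgets, but the core test, difference against both neighbours, is the same.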

Nathan: Interesting. And that’s also where the bad actors’ efforts might be focused to breach, if I’m not mistaken, right? So it’s the same old arms race: we put out security measures, and then they try to break through. Focusing back on the user, and how user activity can be controlled: who will be responsible for the bad actors in the metaverse? I mean, you said we could have something that might block out artificial advertisements or whatever, using your VR goggles, for example. But there are people in that space doing things that you wouldn’t do in normal society.

But now you’re in the metaverse, you feel like you have this power, you can do whatever you want. How are we going to moderate that user activity and speech? Do we give control to the users, an action they can take to push people away? Are there going to be moderators? Is there going to be a way to automatically ban people? Is it all of the above? What should happen to really protect the people in that space?

Bill: Well, I mean, certainly intelligent moderation would be nice. The efforts by the major social media companies to deploy that technology have, to be polite, just fallen short, right? I mean, there’s the council of elders at Facebook that was to determine what is and is not appropriate speech. I was blocked for a week because I reposted a picture on Facebook, and it was like, wait a second, that guy put it up, I just reposted it. Anyway, there’s still a few bugs in the system, to quote Garry Trudeau.

The reality is that we will need some way to moderate content in real time, and then you start talking about a massive amount of AI. And the problem with machine learning, as the flavor of artificial intelligence that can do this, is that the machine learning training set can be polluted, and we’ve seen demonstrations of the feasibility of attacks like that.

Just as an aside, my company has over 30 years of data on vulnerabilities, and nobody else has it. And it’s on our servers. It is exceptionally well protected and constantly validated, because we want to have a training set for applications as yet unknown, and we don’t want somebody smarter than us, and there are people smarter than us out there, figuring out a way to plant a logic bomb that we completely miss. I would like to see social media companies take, say, one one-hundredth of one percent of their annual profit and create a training zone. So, you turn 13 and now you can get onto this platform.
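The training-set pollution Bill describes can be seen in miniature. In this toy sketch (invented data and labels, not any real product’s model), an attacker who can write a few mislabeled rows into the training data drags a nearest-centroid classifier’s “benign” centroid far enough to flip a later verdict:

```python
def centroid_classifier(train):
    """Fit a nearest-centroid classifier: label -> mean of its 1-D points."""
    sums, counts = {}, {}
    for x, label in train:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {lbl: sums[lbl] / counts[lbl] for lbl in sums}
    def predict(x):
        return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))
    return predict

clean = [(x, "benign") for x in (1, 2, 3)] + [(x, "malicious") for x in (9, 10, 11)]
# A poisoner relabels a few malicious-looking samples as benign,
# dragging the 'benign' centroid from 2 up to about 6.9.
poisoned = clean + [(9, "benign"), (10, "benign"), (11, "benign"), (12, "benign")]

clean_model = centroid_classifier(clean)
dirty_model = centroid_classifier(poisoned)
clean_model(8)   # "malicious": 8 is nearer the malicious centroid (10)
dirty_model(8)   # "benign": the poisoned benign centroid has shifted toward it
```

Real attacks target far larger models, but the mechanism, corrupting what the model learns from rather than the model itself, is the one Bill is guarding that 30-year data set against.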

Well, before you get onto this platform, I need you to spend 15 minutes just going through this. You’re going to see some things, and I’d like you to be aware; I’d like to give you a heads-up. I think a responsible company should train its customers in the use of its technology. Now, again, the underlying question is: who’s the customer?

If you’re Twitter or Facebook, well, it’s the person who’s paying the bills. And who’s paying the bills? It isn’t me, logging on and looking at my feed. So we need a way to tweak the model and say, okay, it’s okay to sell cars, but we need a social mechanism to make sure the people who are driving the cars know what they’re doing.

And we need a system of laws to say what constitutes appropriate behavior. In fact, earlier today I got a message from Facebook asking me to give them a peek at either a passport or a driver’s license to verify something, so that I could continue to comment on political issues. Bravo. I mean, that’s great. I would like people to authenticate themselves before entering into a multiplayer environment. I would like to know that the sponsor of that multiplayer environment can assure me, as part of the terms of service, as part of the end-user license agreement that I agree to, of strong authentication before I participate.

I’d like them to say: in fact, we vet it, the people you’ll deal with can be identified. Like that person who assaulted Jordan: get back to him, find him, don’t let him back on the platform, or them, or if it’s a bot, unplug it, or whatever.

Nathan: Do you think we’re going to be asked to identify all the stop signs in a photo to make sure that we’re not a bot or an AI? I mean, it ties into that, right? Just having some kind of verification process there, some kind of identity that’s tied to the user, so that there is some kind of accountability that you can be held to if you do cause something. Do you think it’ll go in that direction?

I mean, your example of Facebook asking for some identification so you can comment on political stuff is a great example, right? Because if it continues to go that way for large audiences, is that a vulnerability? Is that somewhere you think bad actors would be able to exploit?

Bill: It depends on how the stuff is implemented. Could I have passed them a fake ID? I don’t know; I didn’t try. I wouldn’t take the risk, but I imagine as a research project it would be interesting to see what happened if a white hat actor were to create a persona and then attempt to do something political. How hard would it be? Now, I’m not advocating breaking a law.

I mean, there are stories about reporters trying to sneak bombs onto planes to test the efficacy of the TSA. That’s a little far-fetched, but God bless ’em. I love reporters who do research. Maybe they should have taken the 200-level class, though. The point is, yeah, we can put other measures in place that won’t be terribly onerous.

Having people identify that they are who they say they are as a condition of entry into a thing is great. Having some kind of an anteroom where you can do training; having some kind of a message board that tells you: here are the kinds of things we’ve seen go wrong lately, keep an eye out for this one, it turns out it’s a scam. Those kinds of warnings would be nice.

Nathan: Next time we wrap up the series where Bill sheds some light on data privacy concerns, home security, and regulations. We’ll see you there.


Nathan: Welcome to the final part of our Cybersecurity and the Metaverse series. Check out the last two videos if you missed them, where we discussed threats, vulnerabilities, law enforcement, and moderating user activity in the metaverse. Back to the conversation.

Is there a way, because we’re talking about all these different measures before a user even accesses and enters the metaverse: well, once you’re in, is there a way to safeguard privacy? You’re in this server, this metaverse, that’s potentially owned by a couple of large companies, and what’s stopping them from hearing your conversations, tracking your activity?

If we have biometric suits, suits that collect different data on us as well, is that data going to belong to them? Is there going to be a way to safeguard people’s personal information and data privacy when a thing like the metaverse exists?

Bill: Well, the social media companies have by and large been on the side of: if you enter into this universe, we have the right to everything. I mean, I love how every few years that email goes around saying that as of Friday, Facebook will have the right to use all your pictures. Well, they got the right to use all your pictures when you signed up for Facebook 15 years ago.

So don’t believe that stuff. The problem is if that information is used in a way that’s harmful, and there are so many examples that come to mind. Say you wear a Fitbit, and you wear a ring that keeps track of your blood sugar level, right? And you go to the drugstore to pick up your medication, and on the way you pick up three Snickers bars.

Does your insurance company, working with the pharmacy, note that you bought three Snickers bars? Is that going to increase your premium, because you bought them while you were there to get your medication? I mean, the possibilities for abuse... Look what we did with genetic predictors of potential serious health problems: employers can’t use them to make decisions about employability, and insurers are prohibited from using them to determine eligibility for insurance.

Well, that’s a step in the right direction, but maybe the point is that human beings are dramatically different from one another. And rather than start out by saying everything you do is mine, maybe we should adopt a position more like: I’m only going to use stuff that you allow me to use, I’m only going to use it for what you agreed it would be used for, and you have the unconditional right to be left alone. Now, if that sounds like the GDPR, it kind of is, but I am actually referencing a Harvard Law Review article from 1890 called “The Right to Privacy.” That article came about because a new technology had shown up.

It was the portable camera, right? So you could be lurking outside the Say No More Bar and Saloon and catch a picture of the Reverend Dr. Smith staggering out of the bar Saturday night, and there it was on the front page of the town journal on Sunday morning. There’s a privacy issue, a concept that had never been thought of before, because there was no technology prior to that which would allow me to capture an image, or a movie, or a person’s complete suite of biometrics, that could then be used for any one of a number of purposes: political action, embarrassment, selling them something, blackmailing them, coercing them into doing something they don’t want.

We do need to bear in mind that there are privacy risks. Assumptions of trustworthiness are unfounded until proven. We need to be able to train people to be aware of that. I can’t stop bad people from driving, but I can put stuff in place, like street signs and street lights and driver’s training and car insurance regulations and police and crumple zones and seat belts, all of those things, to try to make it safer.

Even though there are people who are not as good a driver as I am. And I think of myself as a better-than-average driver, which means that most of the people on the road are worse than me. So yeah, my level of awareness can be up here, but I want protections for the people who aren’t functioning at my level, so that they can also be confident that they’re doing something safe and they’re not giving away the family jewels.

Nathan: Right. Well, with that analogy, though, we’re talking about the drivers, but if we’re talking again about the roads, and the people who decide about the stop signs and the regulations, I mean, that’s Meta, or Facebook. I guess this isn’t a new issue, right? This privacy issue is not new, so it’s just going to have to extend into this new metaverse and the new type of information we’ll be giving out when using these services. Do you think there’s going to be any kind of change, given the amount of information we would be giving in a fully realized metaverse? You can imagine a scenario where you’re meeting people online, you’re having conversations, you’re doing all of that, and now it’s like you’re inviting the companies hosting the metaverse truly into your daily activities and conversations.

And on top of that, biometrics, if there are bodysuits and that sort of thing. Do you think there’ll be a need to really tighten up the laws? GDPR is great, but is there going to be a next level of that for Europeans?

Bill: I think there has to be, and it will be regulatory. It’s not going to be voluntary, and it’s not going to be driven by some large economic force from outside changing things. It will have to come about as a result of legislation, which will come about as a result of some troubling occurrences which point to the fact that there needs to be regulation. Regulation is always reactive.

It always follows what people do. If I were running a global law firm or a multinational bank, I would not hold my merger and acquisition meetings in the metaverse; I’d use a trusted line from a secure location to carry on that conversation. The degree of diligence has to be proportionate to the risk involved.

People don’t take themselves very seriously until they do. There was a story about the RSA conference in London many years ago, where people would get a $1 candy bar if they gave their user ID and password to a person standing on the street. And people did it. Now, maybe they all lied, I don’t know; if they were clever, they would have, and then you get the candy and the joke’s on them. But I have a feeling that some of them probably gave up the store for one-thirtieth of a piece of silver.

Nathan: If we could just change gears a little bit and go back to the current AR and VR tech that is approaching something akin to a metaverse: are there any types of AR and VR tech that you think will benefit the most from the metaverse? I mean, I’m already thinking about medical professionals and the robotic surgeries that doctors are performing remotely. Are there any other examples like that?

Bill: Well, you’re on the same track I am. I think also that medical devices, diagnostic procedures, surgical procedures being able to be done remotely, again in a trusted environment, using verified tools, with assurance that it really is a doctor in place, I think that could be a tremendous benefit. I was listening to a presentation from over a decade ago by Marvin Minsky. He said the most significant health hazard in human behavior is the fact that people still shake hands; that’s the greatest way in which diseases get spread.

So in the metaverse, you would never get a disease from shaking anybody’s hand, virtually. So there are tremendous benefits to being there: the cost of travel, the ease of being able to get to things. I’d love to be able to have important meetings or high-level summits without having to get on a plane, go 14 hours, and then wait forever for my bag to show up at JFK or wherever I land. There are tremendous cost savings and great benefits in education and training. I have a friend who’s involved with a charity in Sub-Saharan Africa, and part of it is training in basic hygiene. Well, if she did not have to fly there, she could probably reach a lot more people and get the word out a lot more quickly and effectively, because she’s already trusted.

She’s already known, and this would just amplify her effectiveness by giving her that presence and the ability to get reactions from the folks nearby as they try to make a pot or reassemble the water filter or whatever it is. There are tremendous potential benefits. We just have to figure out how to do it safely and securely.

Nathan: Yeah. Talking with my colleagues recently about the metaverse, it seems like something so far out in the future, but then we look at current examples of what we do have, which are smart home devices: things at home like Google’s little pod (I forget what it’s called), or Alexa, or what have you. Is home security affected at all by this rise of smart home devices?

Bill: It is. I was visiting my uncle in Florida last month, and my daughter drove up. She’s down in the Miami area and he’s up in the Tampa/St. Pete area, so it’s over a three-hour drive. On the way there, she got a note from her roommate, who said, I left a day early. She texted back to her roommate: did she leave the key under the mat, so her boyfriend could get in and feed the cats? They were going to be gone for four days. And he’s like, oh, I’m already four hours down the road, don’t make me go back. So we’re problem-solving this. Does she meet the boyfriend halfway, so that each drives an hour and a half to hand over a piece of metal? Is there a way we can make a copy of the key and then FedEx it there?

And then my son said there’s an app where you can scan a key and turn it into a pattern for a 3D printer. And we all started looking around. It turns out there’s a company that has kiosks in places like the Publix supermarket, where you can bring in your key and put it in the device, just like you would at a hardware store to get a copy of the key made, but it sends the plans to another kiosk.

So she went to the Publix in Venice and sent it to him, and he got a code to go to a kiosk at a Publix on the north side of Miami, and he got a key, and it worked. Now, to my small mind, that is like 85% of a transporter. I’m thinking of Star Trek: geez, this is amazing. Well, now I’m thinking, there’s a virtual copy of the key in the network.

That’s right. But it’s like, I think I can live with the risk, because they did require a code from her, and there was two-factor authentication at his end. And what’s the risk that a student apartment somewhere in Florida gets broken into? Well, I don’t think it’s dramatically increased because of this one transaction. So yeah, these little things are going to make life a lot easier. I mean, just think of the gasoline savings alone. I was ready to lay out 40 or 50 bucks for an overnight FedEx weekend delivery.

Nathan: This is kind of a one-time occurrence, but if this becomes a common thing, where we’re literally sending digital prints of our keys to this company, that becomes a vulnerability at some point if it’s commonplace, I would assume. The example that my friends and I always use is the smart toaster: how secure is that smart toaster that’s connected to the internet?

Can that be exploited to access, literally, my internet and anything that’s kept on my computer, or anything that’s connected that way? Is that a reality? Is that something that could occur? And again, there’s the more advanced example of a Google device or an Alexa.

Bill: Absolutely. My older daughter refuses to enable the voice command on her TV. I have it, but I don’t have any Alexa gear, and I’ve turned off the mic on my remote for that TV. I did a survey – I have some technology that takes a look at transmissions across all bandwidths – and I found that I have 13 devices in my little apartment that are communicating over half a dozen different channels. I’ve shut down some of them. Some things come up with Bluetooth automatically. Some I can’t shut down: the thermostat will talk to the internet, and it knows where I am and adjusts the temperature not only based on time of day, we’ve had that for a long time, but based on whether or not I’m here. If it thinks I’m gone, it’s going to drop into eco mode and save me some bucks, but anybody who can tap into it will say, ah, this device thinks that Mr. Malik is no longer in his apartment.

We strike at dawn. Well, for me, probably not. I mean, I write stuff and give talks, but some of my colleagues are involved in hacking the dark web; they’re participating in activities in the cyber underground. And they would probably not want that level of potential visibility into their environment. So yeah, it’s a risk. It’s a trade-off. I do keep stuff turned off. I’m a little more paranoid than most about that, but that’s the solution. It’s like putting a lock on your gas cap. I mean, if the bad actors really want gasoline, they’ll punch a hole in the tank, right? But if they’re in a hurry, they’ll just go to the next car with a gas cap that’s not locked. So that’s the kind of measure it is; it’s the theory of crowds.

Nathan: So when it comes to security measures, or cybersecurity measures: if the internet becomes kind of a national basic utility, like water or power, how does that change the conversation around these cybersecurity initiatives? Should we see an increase in positive regulations, or does it actually increase the vulnerabilities now that it becomes this national service?

Bill: Well, the fact is that it really kind of is: whenever you see discussions about critical infrastructure, the internet and telecommunications are always way up high on the list. What would help would be if the federal regulations governing the security of physical mail were extended to internet-based transactions involving either personal information or significant amounts of value. Right now, if I get somebody to send me something they shouldn’t send me through the mail, I’ve just violated federal law.

And that means I have to leave the country to get out of the jurisdiction. We had something like that, and right now the government’s moving very slowly, but they’re making steps. The FCC just recently went after a major generator of spam; something like 20 or 30% of the spam traffic was from this one company, and they told them to stop. The FCC also told the carriers: if you continue to carry this, we’re going to pull your license to do interstate commerce and communications. And it was like, whoa, okay, without that I’m not in business. So they shaped up. So yeah, I think that kind of thing’s good. I have no patience with people who say there’s a right to yell fire in a crowded theater. There are limits on freedom of expression, and rightfully so. We’ll get there.

Nathan: The way that question came up was talking with a colleague about whether they would be forced, for example, to be connected to the internet as a result of this kind of nationalization of it as a utility. Like, if you’re building a house in the city, you have to be connected to power.

There has to be a water line connected to it; it’s required, at least in Canada. If the internet becomes such a utility as well, which in part is governed and regulated by the government, it also creates these obligations on our part. Could we opt out? Should there be measures in place if we don’t want to be connected to the internet, for whatever reason? Looking at it from that angle, and you kind of alluded to it, do you think it would create a good end result if it had this kind of national level of regulation or observance?

Bill: Well, you are required to have certain protections when you build a home. You have to have certain protections against the spread of fire; you have to have certain protections for insulation, right? But they don’t say exactly what kind of wattage your home service has to have; they set minimums, and you can do more. The same could hold with regard to internet connectivity.

I think at this point, if somebody wants to opt out, that’s fine. The downside is it’ll impact the resale value, because the next person who lives there may want internet. So they’re going to say, well, I have to do this installation, so I’m going to ask you to take a couple grand off the price. And I say, okay, the price of freedom is I lose a couple grand in resale.

I think that’s a fair trade. In some cases it becomes a little more problematic. Newer cars have an enormous amount of technology in them – not dozens, but hundreds of processors, sensors, and actuators all over them – and they’re in communication with the manufacturer now. Yeah, it’s my car, but if I disable some of those capabilities, then I will no longer be able to get the car properly serviced. One of the things I’ve been on again and off again about is the little fob that you can plug into your car that will let your insurance company know that you’re a safe driver. Well, it turns out I don’t think I want them to have that much. So even if there’s a 5% reduction in premium, I’m not going to use it.

It turns out there’s a technical reason as well. If you do a hard stop, they consider you a bad driver. A hard stop is decelerating at, I think, something like five miles per hour per second. What that means is, if you’re on the Merritt Parkway doing 60 miles an hour and you have to come to a stop, and you come to a stop in less than 12 seconds, that’s a hard stop. Well, if you’ve ever driven on the Merritt Parkway, if you don’t come to a stop in significantly less than 12 seconds, you’re going to rear-end somebody. So in that case, the spirit is brilliant, but the implementation is a little off the mark.
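Bill’s arithmetic checks out, and can be sketched in a few lines. This is only an illustration of the heuristic as he recalls it (the 5 mph-per-second threshold is his figure; a real insurer’s formula may differ):

```python
# Assumed threshold from the interview: roughly 5 mph of deceleration per second.
HARD_STOP_MPH_PER_SEC = 5.0

def is_hard_stop(start_mph: float, stop_seconds: float) -> bool:
    """Flag a stop as 'hard' if average deceleration exceeds the threshold."""
    if stop_seconds <= 0:
        return True  # an instantaneous stop is certainly hard
    return (start_mph / stop_seconds) > HARD_STOP_MPH_PER_SEC

# From 60 mph, the break-even point is 60 / 5 = 12 seconds: stopping in
# anything under 12 seconds gets flagged, which is the Merritt Parkway problem.
print(is_hard_stop(60, 11))  # True
print(is_hard_stop(60, 13))  # False
```

Since safe highway stops routinely beat the 12-second break-even, the heuristic flags ordinary defensive driving, exactly the edge case the conversation turns to next.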

Nathan: They missed a few edge cases there, haha. Yeah.

Bill: Yeah. Like people who drive in cities, haha.

Nathan: Haha, yeah, fantastic. Well, Bill, that’s it for our questions today. Was there anything we missed that you were hoping to talk about?

Bill: We’ve covered a broad range of material, and if something comes up, I’ll contact you and maybe we can do this again.

Nathan: Absolutely. Well, thank you so much for your time.

Bill: It was wonderful. I appreciate it. I look forward to seeing what your team comes up with and thank you very much for your brilliant, insightful questions.

Nathan: Appreciate it, absolutely! At home, if you like what you’re watching, be sure to check out our last video with Bill’s colleague and VP of cybersecurity, Greg Young, where we talked about the past, present and future threats and innovations in cybersecurity.

We’ll see you next time!
