Health is about more than just the cure; at the heart of it is prevention. Having the right information in your hands could be the one thing that stops a health concern from growing before it is too late. With machine learning, we can pave the way for a predictive approach to medicine. This is one of the main goals of Feebris, a company that believes informative and efficient healthcare should be accessible to all. In this episode, its Senior Machine Learning Engineer, Max Antson, joins us to talk about his AI-powered mission to help achieve this goal. Max shows us the bright future of machine learning, especially in digital health, and how it enables the industry to make more informed and more accurate decisions based on AI-driven insights. Tune in to this powerful conversation and see what machine learning can do when it is used the right way for the greater good.


The Future Of Machine Learning – A Conversation With Max Antson

We have Max Antson here, and he is an expert in the field of machine learning. I’m very excited to talk with him because he’s applying those expert skills to healthcare, something that I’m interested in. Max, tell us a little bit about yourself and what you’re doing.

Dr. Awesome, thank you very much for having me on. I’m excited to chat with you and everybody. I’m a Machine Learning Engineer at a company called Feebris. The thing that we specialize in is machine learning and artificial intelligence in the space of digital health. One of my huge passions is seeing how we can apply something that I’ve been studying for a very long time to the real world, truly help people, make a difference, and try to pave the way for more of a predictive medicine approach rather than reacting when things are already too late. We focus on trying to enable the workforce to make more informed and more accurate decisions based on our artificial intelligence-based insights.

One of the examples that you gave is that there are many young children in India who die from pneumonia because their parents don’t realize that they have the disease. Explain to me at a basic level how machine learning and artificial intelligence can help in that outcome.

It’s quite a crisis that’s happening worldwide, and it especially affects populations that don’t have immediate access to a doctor. It’s something that sits with our CEO as well. It’s one of the reasons that sparked her to found the company in the first place. Normally, you have to go immediately to a doctor, who is the only one with all of the equipment to record the key biomarkers, extract information from them, and make decisions, and doctors are already overworked and having to see too many people at once. What machine learning can do is take that off of their hands.

They’re still going to be the key decision maker. It’s something we should emphasize that we’re not taking anything away from their job. We’re enabling them to make decisions faster. What we can do is take that technology out to, for example, child healthcare workers who may be out in the field working with these children. Those devices can be things like a pulse oximeter. For anyone who doesn’t know, that’s something that attaches to your finger and gets your oxygen saturation levels. You can get a blood pressure cuff, a digital stethoscope, and things like that.

FSP – DFY 19 | Future Of Machine Learning

These are all things that we will be having in our kit. They can then, with quite minimal training, go to the child and connect this child up to this equipment. We then have a mobile app that extracts all of the signals from those devices via Bluetooth and uses AI magic to get a lot of the insights. For example, from a stethoscope, you can get key lung sounds. There might be lung sounds like crackles and wheezes that you might hear in those audio recordings that are very indicative of something potentially going wrong and may be leading to pneumonia.

With all of that information, a doctor or a clinician can look at that and quickly see who needs to be seen immediately, who’s okay, and all of this. Hopefully, it improves that system. That’s where machine learning comes in. It’s a three-stage process as well because you can think about this and be like, “This is a cool and dreamy scenario where you can do all of these things,” but imagine you’re out there in the field, and you’ve gone to somebody’s house. Maybe there are four kids. They’re all screaming, crying, and not wanting to do any of this. They all want to play. There’s a car driving by. Someone is shouting that dinner is ready and all of this stuff.

If you’re trying to record a stethoscope recording in this environment, it’s pretty much impossible to get any meaningful information. The way that this can work with minimal training is if we focus on getting quality measurements from these devices. Our first layer is all about the quality of those recordings. We have many algorithms. We’re checking, “First of all, have we got meaningful information from all of these different devices that we’re using? If we don’t, can we provide feedback to the users as to what’s going wrong?”

For example, we might ask one user, “There’s too much noise in this recording at the moment. Can you go to a different room? Can you move the stethoscope a little bit? It’s not placed in the correct area.” That’s the key bit that is missing in a lot of the various solutions. They’re trying to do this thing. Hopefully, with that foundation, you can get meaningful insights.
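That first quality layer can be caricatured in a few lines. The sketch below (in Python) gates a stethoscope recording on a crude in-band/out-of-band energy ratio and turns a failure into actionable feedback; the band edges and threshold are illustrative guesses, not Feebris’s actual algorithm.

```python
import numpy as np

def quality_feedback(recording, fs, snr_threshold_db=10.0):
    """Toy quality gate for a stethoscope recording.

    Compares the energy inside a rough lung-sound band against the energy
    outside it, and turns a failure into actionable feedback for the user.
    Band edges and threshold are illustrative, not a real product algorithm.
    """
    spectrum = np.abs(np.fft.rfft(recording)) ** 2
    freqs = np.fft.rfftfreq(recording.size, 1 / fs)
    in_band = (freqs >= 100) & (freqs <= 1000)  # rough lung-sound band, Hz
    signal_power = spectrum[in_band].sum()
    noise_power = spectrum[~in_band].sum() + 1e-12
    snr_db = 10 * np.log10(signal_power / noise_power + 1e-12)
    if snr_db < snr_threshold_db:
        return False, "Too much background noise; try a quieter room or reposition the stethoscope."
    return True, "Recording quality looks OK."
```

A clean in-band tone passes the gate, while broadband noise fails it and produces the feedback message instead of being forwarded to a clinician.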

The second stage is once we have the good signals and the good quality, then we can extract accurate information. One big example is this. You’re an expert in the medical field yourself, much more so than I am. To measure the respiratory rate, the number of breaths someone takes in a minute, typically what you will do is count the number of times the chest moves up and down. If somebody with minimal training has to do that frequently in these areas, the chance of error is very high.

For example, if you do this across a huge population, you would expect to see a fairly even, normal distribution of rates around the average of about 16 or 17 breaths per minute, but when you plot this, you see that almost everyone is at either 18 or 20 breaths per minute because someone has counted for 15 seconds and multiplied by 4 instead of counting for the full 60, and if the result seems abnormal, they shift it a little bit. This is one of the most important vital signs for predicting pneumonia in combination with things like lung sounds. Getting this right is key, and this is where machine learning can help.
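The spikes Max describes fall straight out of the arithmetic: counting for 15 seconds and multiplying by 4 can only ever produce a multiple of 4. A toy simulation (illustrative numbers, not real field data) makes the quantization obvious:

```python
import random

random.seed(0)

def true_rate():
    # True respiratory rates: roughly normal around 16.5 breaths/min.
    return max(8.0, random.gauss(16.5, 2.0))

def shortcut_measurement(rate):
    # Count breaths for only 15 seconds and multiply by 4:
    # the recorded value can only ever be a multiple of 4.
    return round(rate / 4) * 4

measured = [shortcut_measurement(true_rate()) for _ in range(10_000)]
print(sorted(set(measured)))  # only multiples of 4 survive
```

A smooth distribution of true rates collapses onto a handful of values, with a big spike at 16, which is exactly the kind of histogram artifact described above.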

One of the projects I’m proudest of having worked on is extracting the respiratory rate from the pulse oximeter device, for example, which you can also do in the background. In a way, that’s also better from a psychological perspective because if somebody knows that their breathing is being counted, they’re going to change how they breathe, but if they’re not aware, and it’s something going on in the background, that’s going to be the true rate, which we can then analyze and take into account with the rest of the context.

Finally, the third layer is we have good quality, we have extracted all of the information that we can, and we believe it’s accurate. We can then do this many times over days and weeks and see, “Are there any trends going on in this patient that might be indicative of something going wrong? Can we predict deterioration?” That’s the holy grail of a lot of medical research. That’s this third big layer. If we can do that, and I believe we will be able to relatively soon across the world, then that’s going to be huge. It’s going to change how medicine is shaped. That is what everybody wants to aim for.
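The third layer, trend detection over repeated measurements, can be sketched very simply: fit a line to the last week of readings and flag a sustained rise. The window and threshold below are hypothetical, nothing like a validated deterioration model.

```python
import numpy as np

def trend_alert(daily_values, window=7, slope_threshold=0.5):
    """Toy trend check: fit a straight line to the last `window` daily
    readings and flag a sustained rise. Window and threshold (in units
    per day) are illustrative only."""
    recent = np.asarray(daily_values[-window:], dtype=float)
    days = np.arange(recent.size)
    slope = np.polyfit(days, recent, 1)[0]  # fitted change per day
    return bool(slope > slope_threshold)

# A respiratory rate creeping upward day by day trips the alert;
# a stable one does not.
print(trend_alert([16, 16, 17, 18, 19, 21, 22]))  # True
print(trend_alert([16, 17, 16, 16, 17, 16, 16]))  # False
```

The hard part in practice, as the conversation later notes, is defining what “deterioration” means; once that is pinned down, a flag like this can highlight the patient to the system before the event.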

I live in Cambridge, Massachusetts. I’m a stone’s throw from MIT. I took this class over there that was about artificial intelligence in healthcare. They taught us a rudimentary understanding of how artificial intelligence works. I apologize if I don’t get the terms right. You’re creating your dataset. You want to make sure that you have quality in your dataset before you start on the actual analysis portion. When you eventually get there, the holy grail is the predictive technology, which is what everybody wants: the ability to look at vast amounts of information and say, “This is where this trend is going.”

It’s going to be huge, but what’s most interesting to me in what you’re talking about is the advent of all of this new sensor technology. When you say that you have a pulse ox that is able to record a respiratory rate, that’s cool. That didn’t necessarily exist when I was in training. There are so many other sensors that exist now. For example, you have a phone, and that phone can measure where you’re going and how many steps you’ve taken. We have only begun to tap into the amount of information that we could glean from that. What is coming down the pipeline in regard to sensor technology that you’re interested in?

It’s such a fascinating area, especially now. It’s exactly what you’re saying. Funnily, you can combine the two things that you mentioned. You’re talking about extracting respiratory rate from a pulse oximeter and the phone being used as a sensor more. I don’t know if you know this but the phone has a very similar technology to a pulse oximeter in terms of how you can measure the amount of light that’s being passed through or reflected. This is how a pulse oximeter works. It’s either transmission or reflectance technology. It’s all to do with how much light is being absorbed or reflected.

People have also been able to extract respiratory rates from the phone. You can hold the phone to your face, your hand, or something like that, and you can suddenly get your vital signs. People have also played around with blood pressure. I believe work has been done on this that’s looking promising. With these, you can also get your pulse rate, which is very similar to the heart rate. You can get those three vitals within 10 to 20 seconds of having your phone nearby.

That’s something I’m excited about. I’m hearing that things are getting pretty close to being validated on that front. Suddenly, anyone in the world who has a phone can check on themselves with minimal training if the correct machine learning algorithm is being used on good data. They can get some pretty great results out of that soon. Something exciting to me is exactly what you’ve said about phones. We’re tapping into that more.


The signal that’s produced is called a photoplethysmogram, or PPG for short. The amount you can get from that signal is fascinating. People have even been able to extract emotions from it. I believe there are a bunch of other things relating to various aspects of cardiovascular health and stuff like that. If you can get all of these insights from a twenty-second recording, it’s incredible. I would go with that one.
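To make that concrete, here is a minimal sketch of one common way to pull a respiratory rate out of a PPG-like signal: look for the dominant frequency in the respiratory band of its spectrum. The signal here is synthetic, and the numbers (sampling rate, band edges, modulation depth) are illustrative assumptions, not a real device pipeline.

```python
import numpy as np

fs = 50                      # sampling rate in Hz (assumed, in a typical pulse-ox range)
t = np.arange(60 * fs) / fs  # 60 seconds of samples

# Synthetic PPG: a ~70 bpm cardiac pulse whose baseline is modulated by
# breathing at 16 breaths/min, plus a little noise.
cardiac = np.sin(2 * np.pi * (70 / 60) * t)
respiratory = 0.3 * np.sin(2 * np.pi * (16 / 60) * t)
ppg = cardiac + respiratory + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Find the dominant frequency in the respiratory band (0.1-0.5 Hz,
# i.e. 6-30 breaths/min) of the signal's spectrum.
spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
freqs = np.fft.rfftfreq(ppg.size, 1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)
resp_hz = freqs[band][np.argmax(spectrum[band])]
print(round(resp_hz * 60))  # → 16 breaths per minute
```

The cardiac peak sits well above the respiratory band, so restricting the search to 0.1–0.5 Hz recovers the breathing rate in the background, exactly the kind of passive measurement described above.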

One of the things that I heard about in this class, which I thought was so profound, is that if you’re in a Wi-Fi signal, you can have a sensor that tracks your movement within that space. For example, as a Parkinson’s patient’s movement progressively declines or recovers, you know that their Parkinson’s is worsening or getting better. They can track the drug response to Parkinson’s medication by the movement when patients are in their house.

It blew my mind because when you think about it, that’s just one sensor that they’re developing. The ability for us to record all of that data, to me, is a big revolution. AI is great. Don’t get me wrong. It’s going to be amazing to analyze all that data, but we’re in a new frontier with the acquisition of data. There are so many different things coming out. The ECG came out for the Apple Watch a few years ago. Who knows what else they’re going to come up with?

I don’t know if you know of anything else that’s coming down the pipeline that’s interesting to you but I feel like that one is going to be cool because movement throughout the day is indicative of so many different things. Depressed people don’t move as much as when they’re happy. That’s something that blew my mind that’s coming down the pipeline. That’s very similar to what you’re working on. I didn’t know if you had any insight into other things that were particularly interesting.

That’s incredible to hear about. I hadn’t heard about that one before. It adds to the excitement about this whole industry. I do believe that we still have so many untapped things. What you’ve said is also true. I believe that it’s an exponential thing that’s happening. We’re getting these sensors that have been trained on data and give you meaningful insights, but now that we have them all, they’re also collecting more data.

As a result, you get more training data, and it’s going to be more in-depth because the more data you get over a wider demographic of patients, the better it’s going to be, the more biases are going to be reduced, and all of these sorts of things. It’s a general answer but I’m excited about all of the technologies that already are in existence becoming more accurate because of the fact that they’re also gathering data at the same time. It’s a cool interplay of those two things. I’ll have a think in the background if I can think of more sensors and get back to you.

Sensor technology is something that blew me away. The ability to measure is changing at a rapid pace in all industries, not just medicine. The new telescope we have in space is able to give us more information than ever before, from the macro view down to the micro view. I’m sure that there are different things happening on a molecular basis that I don’t even know about, but it’s cool. That’s something coming down the pipeline that will revolutionize a ton of industries.

Let’s touch on the machine learning aspect of it. You’re going to go out and get this data. Are there any existing datasets? You live in the UK. You have the NHS. You have theoretically this universal healthcare system that has data. Is there anybody who’s doing anything with the data that exists that you’ve heard about or maybe you are working on?

There are starting to be more pretty great public databases across the whole world. The biggest one that I know is something called MIMIC. It’s something that we have worked with. It’s something that many different companies and research institutions also work with. That’s fantastic. It’s almost got too much information because it’s hard to navigate through all of the millions of tables, text data, and signals data and map it all together to know what you need. That’s one that’s being used quite a lot.

That has everything that you could imagine while still remaining anonymous for all of the patients. It has been recorded on real patients. I don’t remember which country it’s in but it’s a huge dataset with thousands of patients. Many of them have been connected to various machines like the pulse oximeter, so it’s very useful for us, or blood pressure monitoring and respiratory monitoring. It’s done to an accurate degree. Suddenly, you have loads of signals done all in parallel combined with their medication status, history, and all of their various preconditions.

Everything here leads to a picture where you could plot out what is happening to that patient and what’s happened to them over time. Suddenly, with that data, you can create predictive algorithms and figure it out. The tricky part here is defining what is deterioration so that you can predict deterioration but if you’re able to come up with various definitions of that, you can pinpoint those parts in these various signals and see, “This is where this patient is about to potentially deteriorate. This is someone that we need to highlight to the system.”

This can be done by anyone; anyone can access all of this cool data. I know that there are a bunch of other datasets as well. In terms of the ones in the UK, a lot of the stuff we do is on our data. We found that there aren’t many existing datasets in the UK that you can use off the shelf for very specific purposes, but I do believe that more of that will become available, especially if you use things like electronic health records. They’re everywhere. They’re abundant. There are many aspects that people can touch on and make improvements on.

This is a total side topic but how do you feel about the NHS? I was talking to a researcher from Australia. She loves universal healthcare. We don’t have it here in the States. I’ve heard pros and cons. Are you a fan?

I love the NHS for a lot of things. It’s an incredible idea, and most people would agree. The execution of it, through systemic issues, not through the fault of the NHS but more through funding issues and things like this, is causing a lot of strain, which means a lot of delays happen. The amazing thing is anyone can be seen, and that’s incredible. The theory is that everyone looks out for each other if the whole funding system is in place correctly. I love that. It’s something I cannot support enough. I do think it needs a lot of help at the moment. It’s a system that needs to exist, but now it’s a bit too overburdened to work in the way that everybody wants.

How long does it take for you to get a doctor’s visit? If you were sick now, would you be able to see them the same day or the next week?

It can be a real struggle, honestly. I’ve heard stories of friends. One of my friends had a problem with his knee. It was a minor issue at first. He went to see somebody, and then they sent him away and said, “Come back in 2 to 3 months.” The problem only got worse in those 2 to 3 months, and he couldn’t contact them at the time. By that point, it had gotten pretty bad, and they were like, “You’re going to need potentially some treatments and surgery.” They booked that three months later.

The surgery unfortunately didn’t go fully to plan either. He had to go again 3 to 4 months after that. He was in trouble for over a year in the end. It can be pretty bad. This is one of the worst cases; I wouldn’t say it’s always like that. I don’t know if everyone outside the UK knows this, but at least between us, the NHS is known for being very slow unless you have an emergency.

Those problems exist in the US too. It depends on what exactly you’re looking for. There are plenty of people who have delays with their medical care here as well but it’s nice that there’s insight from different systems as well, especially for somebody who’s trying to make it better with the technology that you’re providing. Coming back to machine learning, let’s zoom out and say you have all this different data and all this artificial intelligence that’s happening.

Where do you see the end goal for this? Is it more of a triage system where the worst cases get flagged and then see a physician? Is it more of a decision-making system where a physician can look through charts more efficiently to give the person the care that they need? The number of people who see me for cases that don’t need surgery and are not emergent is pretty significant, though I wouldn’t say it’s so much that I need something to help validate that. Tell me what the end goal is for all of this.

Our mission is that nobody should suffer from treatable conditions simply because they don’t have access to a doctor. The end goal would be a system where anybody is able to use our technology from the comfort of their home, or wherever they find themselves, to quickly get an idea of their health status and then immediately get help if necessary. If it’s not necessary yet, that’s where the predictive part comes in: somebody is able to track their status, with an alert system in place that’s detailed enough but not so detailed that it becomes over-complex. A clinician who’s monitoring, for example, a bunch of patients in a ward can very quickly see, “These are the people who I need to see now.”

It’s a combination. Overall, it’s helping clinicians to understand where they should be spending their time, with which patients, and where they can make that difference for the right people in that moment. I believe a lot of it should happen before it’s too late if we can make it truly predictive. A high alert should be raised for somebody when it’s predicted that something is going to happen, before it actually happens.

The convenience aspect is also something interesting to me because I have a kid. Taking her to the doctor is an all-day affair. Most of the time, it’s not something super necessary. With the advent of telehealth, is it necessary for me to go all the way to this person’s clinic? I personally don’t think it is. The main thing that they’re doing is a thorough physical exam. Hopefully, sensors will be able to provide better and more accurate information.

The convenience aspect is real. I haven’t found any companies that are doing that. There’s a company here in the US that is trying to outsource pediatric visits where they send you a stethoscope and certain things. You get on the call with somebody but they do require you to do the otoscope exam. Unless you’ve done it 100 times, you’re not going to get a good reading. Once sensor technology progresses, then that’s going to be an interesting environment because most parents would love to not have to go to the doctor. In Third World countries, they might not have access to a doctor. How do you feel about convenience as part of the business model?

It’s massive. You’ve touched on an important point there. It’s the thing I mentioned before: somebody who has never done the otoscope measurement is like, “How am I going to get this right?” They’re probably not going to get it right. It might seem convenient for them to do it from their home, but if you get bad data, the clinician on the other end sees it and is like, “This is useless. I can’t act on this.” The patient is going to have to come in anyway, which is potentially very bad if the clinician misses identifying something because they had bad data.

It’s almost drilled into our model. It’s also more convenient because we are training the users, through the use of our product, in how to use it in the best way to get good data. If it’s bad data, we won’t send it to a clinician; we’re like, “Don’t look at this. This is useless.” We try to ask the patients to retake as much as they can. That is convenient to both the clinician and the user because the user will get everything that they want at the end much faster than having to go in for a proper, thorough examination.

Using machine learning tools is convenient to both the clinician and the user because the user will get everything that they want at the end much faster than having to go in for a proper, thorough examination.

Overall, convenience is super important. It’s quite an interesting one because what I’ve said is a bit contradictory. It’s a bit less convenient if they have to take a recording for longer because the data quality is bad, but long term, it’s going to be more convenient for them to have the right results. It’s a trade-off of these sorts of parameters.

The convenience of not having to go to a doctor to get this information is huge. I’m speaking from the US perspective. The whole process of going to a doctor is such an ordeal. You go there, put your name in, and sit down. You wait for 30 minutes. Sometimes the doctor is running late, and I’m guilty of that. You see the medical assistant. They take the vitals. You wait some more and then finally see the doctor. Then you have to check out. From start to finish, we’re talking about three hours at least. If you could do that in the comfort of your home, that would be something powerful.

Let’s zoom out because machine learning and artificial intelligence are so prevalent in the news. What companies are you most excited about? For me, ChatGPT is cool. OpenAI is the company, but the tool is ChatGPT. Some of the image software like DALL-E and other generative AI that creates pictures is also cool, but to me, it’s a novelty. I use it for emails. It hasn’t broken into the search capability yet, but being in the industry, what are you most excited about that’s coming down the pipeline? The best thing would be a digital assistant, somebody who’s able to manage my life. I don’t know if that exists. Tell me. What artificial intelligence or machine learning stuff is most exciting for you?

It’s going to be cliché, but I do love OpenAI a lot. It’s an incredible company. Sam Altman, the CEO, is someone I’ve admired for years. When ChatGPT first came out, I don’t know how you reacted, but my coder friends and I were all like, “That’s crazy.” Now, the hype has died down. Potentially, it’s getting a bit worse over time, and they’re investigating that.

It sparked something that is unstoppable now, in a good way. That sounds very doomsday but, in my perspective, this is something that can be incredible. Any new technology that comes out is going to have bigger potential positives and bigger potential negatives. That’s the same with pretty much any innovation that people introduce to the world. What’s overlooked is the positive side of what this can do.

Any new technology that comes out is going to have bigger potential positives and bigger potential negatives.

How do you feel about them going private? The whole idea was to be open-source so that the AI didn’t get out of control. Going private, or selling to Microsoft, has gotten mixed reviews. How do you feel about that?

It’s mixed. Part of the private part is also where the funding is coming from. I understand that they do need funding to be able to pursue this idea. For example, they had a lot of funding removed when Elon Musk decided to leave the company. For the survival of the company, and so the employees get paid, that is a necessary part. I’m not a big fan of Meta at all, but I do like their open-source approach, and I would hope OpenAI moves toward that. I do have a feeling that the reason they haven’t is that there must be something they are worried about, and they want to do some research before they expose all of this to potentially bad actors and things like this, but that’s also inevitable. I’m sure they’re thinking about this a lot as well.

If it’s always going to be closed-source, that’s a problem, but with the fact that it’s closed now, there are potential reasons that they’re not necessarily wanting to discuss yet. I trust that for the time being. What I do like is that they introduced this technology to the world before it was crazy. I remember listening to an interview with Sam Altman, who said something like, “If we had released GPT-7 without having shown anyone any of this beforehand, they wouldn’t know.”

They haven’t tested what it’s like in the world, what people will do with it, how people will respond, how bad actors will perform with this thing, and all of this stuff. That’s something I love. They were the pioneers of releasing this and at least getting people to test it, use it, and benefit from it in so many ways. I use it on many days when I’m coding. Quite often, it’s an incredible tool. Overall, I’m still optimistic that what they’re doing is on the right track but I’m monitoring.

Do you feel like the pessimism around AI is overblown like some of the existential threat terminology and that kind of stuff?

It’s understandable why people would feel that way. If I didn’t know about the field at all, I would probably be a lot more worried. I’m not going to necessarily say it’s overblown because we should be concerned about all the various things but what upsets me is the fact that we always have a negativity bias that’s massive. Nobody is talking about how incredible all of the things can be. This is an incredible piece of technology. Everyone upset about it is also using it every day, or many people are. They’re using it for the benefit of themselves and maybe their companies and anything that they’re working on themselves creatively or otherwise. The opportunities are endless.

One of the big things is it’s going to potentially change the job market pretty significantly but I am a firm believer that anytime the job market is disrupted like that, people will find new work to work with the technology instead and make the technology work for them. Hopefully, a lot of the stuff that they don’t like to do can be automated, and then they will be able to do potentially higher-level stuff that they want to do with more ease with the fact that they have this personal assistant, for example, that they’re hoping for. To go back to your question, I’ve already answered it. The thing I’m excited about is the personal assistant that I believe OpenAI is on the way to creating.

Any time the job market is disrupted, people will find new work to work with the technology instead and make the technology work for them.

One of the things that you touched on is that people will start using this technology, and it will provide more fulfillment to their lives because whether it’s medicine, coding, or working in retail, there are a ton of things that we hate doing about our jobs. If AI can remove those things, then work becomes a fun experience. You do what you love about your job, and you make the AI do the stuff that you don’t love. That’s my hope. That is the best-case scenario but also the most probable scenario because technology tends to shake out the stuff that we don’t like to do and make our jobs better. It’s a little bit different in the UK, but especially in the US, you work yourself to the bone. Everybody comes in early. They leave late.


Maybe with the advent of AI, there would be more time to enjoy the finer aspects of going to work. You socialize with your peers. You do what you need to do. It’s not this constant pressure to perform all the time. How do you think it’s going to change the working landscape? That’s a big thing. I tend to agree with you because it’s something that is going to make us better. It’s going to have a profound impact but I don’t think it’s going to replace us. How do you feel about it?

A lot of people, myself included, will have to be conscious of how to adapt because I do think people who do their jobs alongside this technology in a lot of sectors, not every sector, will be doing a lot better and producing more meaningful work than people who do not leverage this technology at all. That is going to be a major aspect. If I were starting to look for a job or trying to understand what career to go into, I would be using this technology as a bit of a personal assistant already, to help me with things in whatever craft I’m thinking about.

That’s a big way that it will change things. It’s going to be this personal system, like having a junior data analyst by your side. They have released Code Interpreter, which can code things up. You give it a dataset and say, “I would like to get the distribution of some form of demographics,” and it will code everything amazingly well almost instantly, something that could take a data scientist a couple of days. You suddenly have these plots ready for you to use.

It might shrink companies a little bit potentially but it might create more companies that do more niche interesting things and can use and leverage these personal assistants to specifically dig into these sorts of things. It’s hard to predict what will happen. Humans are naturally bad at doing that. It’s an exciting time to see where it’s going to go but I am very optimistic. I believe new jobs are going to be created out of this like managing various systems and how they operate together. Being the communicator with all of these assistants will be a huge part. Potentially, more high-level jobs that don’t exist will start to be formed.

I hope so. That’s going to be exciting, especially interacting through conversation as opposed to typing. Typing is the best way of interacting with computers that we could come up with, but speaking is something we have been doing for almost our entire history. My daughter is able to interact with our Google Assistant, and she’s not even two yet. It’s so ingrained in her that she’s able to interact with it.

The ability to talk to whatever computer system you are using is going to be profound. The more complex the questions we can get answered by that system, the more manageable life becomes. I would love to know, even for myself, what time of day is best to wake up so that I get everything done. I’m sure that would be a very easy question to answer if I had enough data, but right now, that information is not available to me. That’s something I’m looking forward to.

We’re getting to the end of our time. I always ask everybody I speak with three questions at the end of our talk to get a little insight into their vision of the future, and also because there are certain things I was thinking about that I wasn’t able to ask while staying on topic. The first thing I ask every guest is this. A lot of my inspiration for doing this show, as well as for giving people a hopeful sense of the future, comes from science fiction.

Some of the best examples of our future possibilities come from science fiction. I think about Isaac Asimov’s robots being able to take care of mundane tasks like watering plants. I want to live in that future. Where do you get your inspiration from? This is a very tech-heavy field that you’re in. I’m sure there are things you’ve been exposed to, or continue to expose yourself to, that give you a bit of inspiration.

The most inspiring thing for me is something from the past, but it has stuck with me forever and is perpetually in my head: my math teacher from when I was 16 to 18. He’s an incredible guy. He made me and my friends total nerds; we would be racing him into school by 6:00 AM, which is a bit outrageous. We would all be having pizzas during the day and talking. He would inspire us with all kinds of interesting mathematical concepts and drill into us that everyone can make a big difference, that you can put your mind to things and make a change.

It’s all cliché stuff, but when somebody you look up to, who is teaching you so many incredible things about the world, tells you this day in and day out, it sticks with you forever. That has always been in my head: wanting to make that vision of his a reality. That’s a big source for me.

That leads to my second question, which I was going to ask you. Artificial intelligence and machine learning are such popular fields. If there’s a young person who wants to get involved, what should they look into? What should they try to develop in themselves so that they can get into a field like yours?

This is not an easy field. You have to work hard. If you’re still at school, you have to study hard, take the exams, and do them well. I came in from the field of mathematics. A lot of people come in from computer science. It’s interesting to see how people develop different ways of thinking about problems through those different fields. Pursuing one of those two is a fantastic way in.

If you do mathematics, you’ll get more of the theoretical perspective on how machine learning works altogether. That would be phenomenal for anyone wanting to go into research, where you want to make these algorithms better or more efficient. If you want to work in an industrial setting and apply these models to new and exciting problems, pursue computer science. Do projects as well. One of the best ways to start coding is simply to code, figure it out along the way, and use the large language models that are coming out to help teach you those topics.

Something I’m doing on the job right now is learning a new programming language, one I had never touched before, that I have to use to deploy various models. ChatGPT is helping me so much. It writes out a lot of the scripts and templates I want to use, and I can ask, “Can you now explain line by line what is happening? Why is it this way in this instance? What should I do here? What should I do there? Why am I getting this error?” It’s incredible. Start a project and use AI as a personal assistant. It’s not going to be perfect, but over the years, it’s going to get you far.

Start a project and use AI as a personal assistant. It’s not going to be perfect, but over the years, it’s going to get you really, really far.

The last question I always ask my guests is where they see their particular industry going in the future. I want to caveat that, because we already talked a lot about healthcare. I know that you’re a musician and that this is something you’re passionate about. How do you think AI is going to affect music? Are we going to consume music differently? Is this something you’re looking forward to?

I don’t know if you use Spotify, but their recommendation engines are incredible. I’ve found almost all of my new favorite bands through their recommendation system. It’s already cool, and it’s going to get better. If your connection to music is being an avid listener of new genres, new bands, or whatever it is, you’re going to discover all these bands that you will love.

The other thing that excites me, for creators, is the music generation aspect: the fact that you’re going to have generative models that create new tunes. Some cool companies are going to come out of this. For example, you might have a little idea to start a song and think, “I don’t know how to finish this. This sounds too generic,” or, “This sounds a little too much like that song.” You can feed it into an AI and ask, “Can you continue this idea? Can you slightly change or improve this?”

There will be a cool symbiosis that comes out of collaborating with AI; it’s going to help you convey the idea you want to convey, in the way you want to convey it. It’s going to help with writer’s block. There’s a worry that all songs and all images are going to be generated by AI. I don’t think that’s necessarily true. Humans connect to the creator, and that can’t be an artificial intelligence system. My feeling is that it’s going to be an incredible tool that helps creators make something amazing that they have been dreaming of creating, do it faster, and be even more proud of their work.

I’m looking forward to it. In a lot of the science fiction that I follow, there are artificial intelligence musicians. They’re not even real, which honestly is cool. That is going to be another offering that music will have in the future. It was so nice to speak with you; it was an interesting conversation. To all the readers, I hope that you enjoyed this conversation with Max. Check him out on his social media. More importantly, we hope to see you again in the future. Have a great day, everybody.

Thank you so much.

About Max Antson

Max Antson is a Senior Machine Learning Engineer at Feebris, with a key focus on quickly devising and prototyping AI-driven solutions to complex problems. Feebris believes informative and efficient healthcare should be accessible to all, and Max is on an AI-powered mission to achieve this goal.


By: The Futurist Society