Today’s guest speaker shares about the changing machine learning landscape. Listen in as Gideon Mendels and I discuss why machine learning is now part of the software engineer’s toolbox, how 2020 will be the year of language-agnostic systems, and what low-code systems mean for the future of data science. This is HumAIn.
You are listening to the HumAIn Podcast. HumAIn is your first look at the startups and industry titans that are leading and disrupting artificial intelligence, data science, the future of work, and developer education. I am your host, David Yakobovitch, and you’re listening to HumAIn. If you like this episode, remember to subscribe and leave a review. Now onto the show.
David Yakobovitch
Welcome back everyone to the HumAIn Podcast. I’m David Yakobovitch. And today I have a guest speaker who I met at the Strata O’Reilly conference in New York City. It’s amazing to see how in the past couple of years, everything has been about data science, software engineering, and the machine learning life cycle. And today we have Gideon Mendels on, who’s the founder of CometML. Gideon, thanks for being with us.
Gideon Mendels
Hi, David. Thank you so much for having me, and I totally agree. At these conferences, it’s amazing to see how the industry moves from year to year. Really glad to be here.
David Yakobovitch
It’s super cool though. We got to chat a little bit offline and see that we had some things in common. I have family from Israel, and you’ve traveled out there and been involved with ventures there. And when I was at Strata O’Reilly this year, there were so many startups actually from Israel, or with joint operations between Silicon Valley and Israel, or New York and Israel. And I found that super fascinating.
Gideon Mendels
Some people call Israel the startup nation. There are definitely a lot of startups, both in the machine learning space and in pretty much every other space. And New York is, I guess, the closest city in the States, so it makes sense to come here. I’m personally originally from Israel, but we’re actually fully based in New York and we’re an American company. My co-founder is also originally Israeli.
David Yakobovitch
That’s great. I actually have a close friend who just moved from the Bay to New York as well, and he’s Israeli. So it’s something we see: New York is the new mecca of technology. So I’d love to hear about your story with Comet. Why did you found CometML? Tell us about that.
Gideon Mendels
Definitely. So I actually started my career as a software engineer, I guess, about 15 years ago or so. And I shifted to working on machine learning and hands-on model development about five and a half years ago, when I was a grad student at Columbia.
I worked mostly on natural language processing and speech recognition. And then I spent some time at Google doing research. One of the most interesting things there is that Google is arguably the company with the best developer practices in the world. Every new engineering hire starts with some kind of boot camp and figures out how to check in code and things like that.
And they really do these things very, very well there. But then I joined a team that was much more focused on machine learning and data science, and it was just kind of shocking to see how many of the issues I’d seen, both at Columbia and at my previous startup where we built a lot of machine learning models, showed up there too.
Specifically, there I was working on detecting hate speech in YouTube comments. That was a few years ago, so back then the problem wasn’t as hyped as it is today, but it was clearly a big problem worth solving for everyone.
The team I joined had already been working on this for a couple of years, and they already had a model in production. One of the things I tried to do as part of my work was to build a better model. And the first thing you do when you start working on a modeling problem, especially if you’re inheriting an existing one, is try to figure out what people did already.
You don’t want to reinvent the wheel, and that’s similar to the academic world, where you do some kind of literature review. You want to see what’s out there, what works, what doesn’t. To my surprise, they had a very hard time answering a lot of those questions: where is the exact dataset the production model was trained on? What exactly is this production model? What are its hyperparameters? Who trained it? How accurate is it? A lot of these fundamental things you need in place in order to build on or revise your approach were really hard to find. We did a lot of work and eventually collected most of the information, but at the end of the day we actually started from scratch, because we didn’t want to base our assumptions on something that might be inaccurate.
And to our surprise, about a month in, we found another approach that was much simpler than what we had in production and actually outperformed it. And that’s where it really clicked. Here I am with a team of really, really smart people, most of them with PhDs, really good at machine learning and data science, at Google, an amazing company to work at.
But when you don’t have the right processes and tools, it’s really hard to get ROI from these efforts.
And if you look at it from another perspective, these machine learning teams look a lot like what software teams looked like 10 or 15 years ago. In software, we have this amazing stack of tools: testing, monitoring, orchestration, CI/CD, versioning, you name it.
There are a lot of tools, sometimes maybe too many. But then you go to machine learning teams, and most of them are still using a combination of scripts, notebooks, and emails. Email is usually the general fallback. But we realized there’s definitely a better way to do this.
Being super excited about developer tools and machine learning and helping these bigger companies build reliable machine learning models, we founded Comet. We’ve been around for almost three years now. As we met more customers and saw more use cases, the platform definitely shifted, but the core idea stayed the same.
We like to say that Comet is a meta machine learning platform. It’s designed to help machine learning and AI practitioners and their teams build machine learning models for real-world applications. That part is critical, because research is great, but at the end of the day we want to make sure what we’re building matches the business KPIs.
The way we do it is that the platform allows these teams to automatically track and manage their datasets, code, experiments, and models. Essentially, we solve problems around reproducibility, visibility, efficiency, and loss of institutional knowledge. That’s the short version of how we got to where we are. It’s very exciting to be in this field; things are moving very fast, and I’m really happy that we get to take part in it.
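For listeners who want a concrete picture, here is a minimal sketch of what that experiment tracking looks like with Comet’s Python SDK. The `comet_ml` Experiment methods shown are real API calls, but the project name, hyperparameters, and loss values are hypothetical placeholders.

```python
from comet_ml import Experiment

# Create an experiment; the API key and project name are placeholders.
experiment = Experiment(
    api_key="YOUR_API_KEY",
    project_name="hate-speech-detection",  # hypothetical project
)

# Log hyperparameters so the run is reproducible later.
experiment.log_parameters({"learning_rate": 1e-3, "batch_size": 32})

# Inside a training loop, log metrics as they are computed.
for epoch in range(10):
    train_loss = 0.1 / (epoch + 1)  # stand-in for a real loss value
    experiment.log_metric("train_loss", train_loss, step=epoch)

experiment.end()
```

Every run logged this way stays searchable later, which is exactly the "what dataset and hyperparameters was the production model trained on" question the team above struggled to answer.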
David Yakobovitch
One of the shocking things that you just shared, Gideon, is how today’s machine learning teams look like software engineering teams from 10 to 15 years ago. I think for listeners of the podcast today, that is an aha moment, a wow: it’s all scripts, notebooks, and emails. And my question to you is, why do you think that is the case? Are we a newer industry? Is there a need for maturity? What do you think?
Gideon Mendels
The thing is, and this is what most companies get wrong when they first assess this space or this problem, is that they look at machine learning and say, okay, it’s basically software engineering.
We have some code, these people are engineers, there’s some data, so let’s just apply our software engineering stack and methodologies to it. But in machine learning, code is just one small piece of the puzzle. You have data, you have experimentation, you have results, you have models, and you have models in production.
There are so many different pieces, and the tools we’ve been building for the past 30 years are designed for software engineering. When you look a little closer, you see that, yes, there’s definitely some overlap. But at the end of the day, these are different processes, and for that we need different tools and different methodologies.
David Yakobovitch
And what you’re saying is just right. As someone who is both an educator and a data scientist, I work with a lot of these tools, and I see the industry continuing to shift and change. So I agree: every three months there’s new technology and new tools, and notebooks are being productionized, but they’re not the only tool now. Some of these new tools and processes could be CometML. Where do you think Comet is bridging the gap for teams to be more software engineering focused?
Gideon Mendels
Our approach has always been to be agnostic to whatever tool you want to use. We work with any type of machine learning library, whether it’s the common ones like PyTorch, TensorFlow, and scikit-learn, or something completely custom that you built in your garage or in your organization; you can still use Comet.
We’re also agnostic to where you actually train your models, whether it’s a cloud provider, your own private cluster, or your laptop. And we’re agnostic to whether you want to use scripts, notebooks, or some kind of pipeline mechanism. That has been our approach the entire time.
Basically, the idea is that as a data scientist, you want to use the best tool for the job. You want to pick the right library. Sometimes you want to use notebooks when you’re doing things that are more exploratory; sometimes you want to use scripts when you’re trying to run something at scale and you want to write unit tests and things like that.
So our approach is: pick the best tool for the job, but still have one platform where you can see everything, compare your results, share them, and collaborate. From that perspective, what GitHub did for code, we’re doing for machine learning.
If you think about things like GitHub, it doesn’t matter which programming language you’re using, which libraries are inside it, or where your code is running. We took a very similar approach, and so far it’s been pretty successful. It really resonates with people, because we don’t lock them into one workflow.
David Yakobovitch
I completely agree. At the end of 2019, as we were moving into the beginning of 2020, I gave my 2020 predictions for the FinTech and developer ecosystems. And one of my top 10 trends for this year is that everyone is going to be moving to language-agnostic systems, that exact phrase you just used: we can’t just build for Python developers, Java developers, or C++ developers.
We have to allow everyone to be part of the game. I’ve seen that as a trend because of the emergence of more APIs and more pipelines, and that’s only going to grow. It’s something a lot of companies are not thinking about yet. You look at job descriptions, you look at engineering organizations, and they say it has to be Python, it has to be R, but there’s something more to it.
So I applaud that you’re language agnostic. In the sense, though, of language agnosticism and what languages are popular, where do you think some of the big languages are at? Are they emerging or maturing for machine learning and data science moving into this new year?
Gideon Mendels
Definitely. I completely agree with your prediction there. If you limit yourself to one language, one library, one workflow, what happens when the best new thing comes out in a language you don’t know how to use? You’re going to be left behind.
Generally speaking, so far Python has definitely been the most dominant language on the machine learning side of things. We still see quite a lot of R users, mostly those with a more traditional statistics background, but we also see people training models in things like Java.
Especially at the bigger companies. Some of our biggest customers are training models at one of the biggest scales in the world, and from an efficiency perspective, from the ability to monitor and maintain these pipelines, it’s much easier to do with a language like Java, just because the JVM ecosystem is battle tested. But we’ll start seeing more and more options in the future.
Actually, you can see the emergence of low-code and no-code solutions, which essentially have their own form of programming language, if you want to think about it that way. Those will become more and more popular as we go as well.
David Yakobovitch
I definitely agree on that. Low-code and no-code, I’m looking to see that as an emerging trend in 2020. I want to take a step back to something you shared at the beginning of our conversation today, which was how a lot of your initial research in detecting hate speech in YouTube comments inspired your work around software engineering and machine learning workflows. Although that is now in the past, looking at today, we’re seeing the change in security. We’re seeing fake news emerging everywhere. We’re seeing fakes in audio, text, and video. Comparing what’s occurring today to what occurred with hate speech on platforms like YouTube, I cannot help but see the parallels.
I almost feel that deepfakes are the new hate speech, and that perhaps we’re going to see a lot of bad actors doing similar things here, whether with comments, audio, or video. So I wanted to get your insights on what you think about deepfakes, and whether you see a parallel there with hate speech.
Gideon Mendels
Like with every new technology, there’s an opportunity to abuse it. Deepfakes are, depending on the use case, generally an abuse of GANs, which are an amazing technology that can be used for really great things. And you can say the same thing about YouTube.
There’s no question that YouTube brings a lot of value to the world, but people can also abuse it. I definitely agree there are some similarities, in the sense that we will need to use machine learning to detect these things. As deepfakes get better and better, machine learning will be able to tell whether something is a deepfake or not much better than we can as humans.
I definitely agree on that point. That’s a good point, and something I probably want to spend more time thinking about. But we need to be careful there, to make sure we both set some kind of policy and also build the technology to allow us to fight these kinds of things.
David Yakobovitch
That is something worth exploring, because you’re right that there are very few policies there. Of course, we’re seeing policies starting in California, in New York, in Europe, everywhere around “let’s stop deepfakes”. And one of my other 10 predictions for 2020 is the emergence of authentication networks.
Whether it’s called that or something else, it’s all about how we guarantee the fingerprint, the identity, of this information. Recently I’ve come to think this is even more important, whether it’s with camera technology, in-person technology, or digital technology. I was just reading one of these digital news digests the other day, from the popular Hacker News website with Y Combinator-backed companies.
I found that there’s a whole platform for people who shoplift, to actually learn how to get away with shoplifting. I said, are you kidding me? This is public. This is on Reddit. What is going on? We need more authentication networks. So I’m hopeful for more of that.
If you’re listening in, please don’t shoplift. Maybe AI technology around cameras, or maybe something Comet is doing on your platform, could be helpful in that space as well. So, segueing to what your platform is doing well today, I’d love to hear about some of the specific use cases or industries that you can talk about or disclose. Where are you seeing some good success?
Gideon Mendels
Definitely. Our focus is really working with machine learning teams. And when I say machine learning teams, there’s some confusion in the industry on exactly what the difference is between a data scientist and a machine learning engineer or an AI practitioner. So the way we look at it is, we’re working with teams that are training models.
That’s a simple way to think about it. And what we found, which is again similar to how things happen in the code world, is that we’re very agnostic to the underlying use case. We have major enterprise customers, multiple Fortune 100s across industries: big tech companies, finance, automotive, media companies, biotech, retail, even manufacturing.
Because at the end of the day, when you’re training these models, of course there are some differences in the types of models you choose for different problems, but machine learning experimentation behaves in a similar way.
Now, we do have dedicated modules in the platform for computer vision problems, for looking at your model predictions and debugging them, and the same for natural language processing, tabular data, and audio. But we’re not limited to a certain use case.
For example, Ancestry is one of our customers, one of the biggest companies in the genomics or DNA sequencing space, and their team is doing a lot of things. One of them is actually natural language processing.
On the other side, we recently announced a partnership with Uber. Uber AI Labs developed a really unique product, or library if you want to call it that, called Ludwig. Ludwig is actually a no-code machine learning library: as an engineer, you define the specification of the model without coding anything, and then you can train your model based on that. And Comet is kind of like the built-in experiment management tool for it. So, very wide-ranging.
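As a rough illustration of that declarative, no-code style, here is a minimal Ludwig sketch. The feature names and CSV path are invented placeholders, and config details can vary between Ludwig versions (older releases call the argument model_definition rather than config).

```python
from ludwig.api import LudwigModel

# Declarative model spec: no model code, just feature definitions.
# Column names ("comment_text", "is_toxic") and the CSV path are
# hypothetical placeholders for your own dataset.
config = {
    "input_features": [{"name": "comment_text", "type": "text"}],
    "output_features": [{"name": "is_toxic", "type": "binary"}],
}

model = LudwigModel(config)
results = model.train(dataset="comments.csv")  # trains from the spec alone
```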
In terms of industries, the majority of what we see in deep learning is mostly in vision, NLP, and audio. And of course, on tabular data, traditional models and tools like linear models, logistic regression, XGBoost, and so on tend to do pretty well. We’ve always believed that if something works well, there’s no need to overcomplicate it with a deep learning model.
David Yakobovitch
It’s so interesting. You just shared two things that I thought were fascinating. One is the Uber partnership with Ludwig. As a data scientist myself, looking at deployment, that’s often one of the most challenging things to do, and it’s fantastic to see that we’re moving towards no-code. Earlier this year, when I was giving some AI trend reports, both at the end of 2019 and now in 2020, I also called the emergence of data science as a service, and that’s more the automation.
That goes hand in hand, Gideon, with what you said about the orchestration and automation that we’ve seen in software engineering for many years. Whether it’s with Puppet, Chef, Ansible, or Terraform, we have all this automation, but it’s only now emerging in machine learning.
That’s going to be a radical shift for the industry. We’re going to have the emergence of MLOps and AIOps, which you’ve been starting to hear about, which is like DevOps meets data science and machine learning. That’s the new space where there’s going to be a lot of change, and you guys are well positioned for it. It’s really great to hear the work you’re doing.
You also mentioned ancestry.com, and I’m a big fan of discovering more about my family. In fact, I use one of the other platforms, and through giving a saliva test, I found out that I had relatives living in the United States that I wouldn’t have known about otherwise. It’s amazing to see on these platforms how they now recommend to me articles and images of birth and death certificates and newspaper records that might be relevant to a family.
So, if you can speak to it, how is ancestry.com using Comet? How are they integrating experiments with Comet, to better tag information or help with discovery?
Gideon Mendels
I really agree with your point. Whenever there’s an opportunity to tap into a new dataset, if you want to call it that, as a data scientist, that’s one of the most exciting things, because you can look at that data and see what kinds of insights or recommendations, like you suggested, you can pull out of it.
For Ancestry, one of the key things is that they have Comet as the central place for their team to track their machine learning experiments and debug them. One of the biggest challenges in machine learning is debugging these models.
And when we say debugging, it’s a little bit different from how you think about debugging in software engineering, because these models are often black-box mechanisms. It’s not about a faulty if statement or an edge case that you haven’t thought about; it’s about figuring out where your model predicts the wrong results.
Often you would look at some kind of aggregate result, like accuracy or a loss. Let’s say your model is 90% accurate; that’s great, that fits the KPI. But what happens with the other 10%? Where is the model struggling, and why? For Ancestry, one of the biggest value propositions of Comet is that they can look into the results of the model, track predictions over time, and better understand what’s going on and how to drive the research process forward.
That’s again one of the most challenging things, because traditionally, when you approach these problems, after you define them and you have the right metric, you try the bag of tricks that everyone’s using, depending of course on the problem. In natural language processing, there are a few transfer learning techniques, things like BERT, or you would try a language model and so on. But then you get to a point where your results essentially plateau.
And you want to understand how to push that forward. I know for Ancestry, that was one of the biggest things they used Comet for. But you also touched on the operational side of things: the ability to stop a running model without SSHing into a remote machine or anything like that. You look at the results, you decide this model is not doing any good, and you just click a small stop button. It’s very simple, but it’s very valuable if you’re trying to move quickly.
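As a hedged sketch of that prediction-level debugging idea, here is how logging a confusion matrix with the `comet_ml` SDK looks; `log_confusion_matrix` is a real Experiment method, while the labels and predictions below are made up for illustration.

```python
from comet_ml import Experiment

# API key is assumed to come from your environment or Comet config.
experiment = Experiment(project_name="prediction-debugging")  # placeholder name

# Made-up ground truth and model predictions for illustration.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 0, 1]

# A confusion matrix shows *where* the model errs (here, it misses
# positives), not just its aggregate accuracy.
experiment.log_confusion_matrix(y_true=y_true, y_predicted=y_pred)
experiment.end()
```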
David Yakobovitch
It’s incredible to see, again, the changes in the industry and how clients like ancestry.com are making those improvements today. And we’re seeing the improvements everywhere. You mentioned the phrase transfer learning, in essence using machine learning to improve machine learning.
Could you dive deeper into that? I know you mentioned BERT and some of these models. Why are we seeing the emergence of transfer learning today? What is it? And do you think we’re there yet?
Gideon Mendels
Definitely. Transfer learning is one of the most exciting things out there. Just for the sake of terminology, transfer learning falls under the subfield of meta machine learning, which, like you said, is using machine learning to improve machine learning.
Again, like everything in this industry, the terminology is not fully settled; that’s why I’m clarifying it. Another subfield you might be familiar with that falls under it is automated machine learning, AutoML. Specifically with transfer learning, the idea is that you can use a model that was trained on a much bigger dataset, because getting a good labeled dataset is hard and expensive. By using a model trained on a much bigger dataset, you can get much better results than with your smaller one alone. In the NLP space there is BERT, there is ELMo; one company doing amazing work in this space is Hugging Face. The idea is that you take a model pretrained on a huge corpus,
whether it’s the entire Wikipedia corpus or something else, and then you just continue training, or fine-tune the weights, on your dataset. That has been shown to provide much better results than training from scratch on your dataset alone. In vision, a lot of the models people use were trained on ImageNet or CIFAR or whatever dataset, and they take them from there and move ahead.
This has two advantages. One is, of course, the ability to get a better result on your dataset, but it also saves a lot of cost. Training these huge language models is expensive; I saw a stat that one of the recent submissions from Microsoft, training their language model, their equivalent of ELMo, cost about $50,000 in GPU costs. And you can download the trained weights with a single command. So that’s where it’s really exciting: essentially transferring all this knowledge into your model.
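As a sketch of that “single command” workflow, assuming the Hugging Face transformers library: download pretrained BERT weights and attach a small classification head to fine-tune on your own, much smaller dataset. The two-label setup here is a hypothetical example.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# One call downloads weights pretrained on a huge corpus.
model_name = "bert-base-uncased"  # a real public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # e.g., hate speech vs. not (hypothetical task)
)

# Tokenize your smaller labeled dataset and fine-tune from here,
# e.g., with the transformers Trainer API or a plain PyTorch loop.
inputs = tokenizer("I like hamburgers", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # raw scores; meaningful only after fine-tuning
```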
Generally speaking, meta machine learning is one of the most exciting things out there, transfer learning among them, but I think there are a lot more opportunities. This is where the differences between software engineering and machine learning are actually working in our favor, because there are a lot of things you can do with machine learning in terms of automation that are actually very hard to do in software engineering.
David Yakobovitch
An aha moment I just took away from that, as you said, is that the differences between software engineering and machine learning are working in our favor. Meaning that although machine learning is a newer industry that’s coming of age now, it’s not bad that we didn’t have this automation and MLOps and AIOps in the space before; now we’ve been able to take the best practices from software engineering and the data analysis industry and combine them.
And I love how you just described that with transfer learning, that now we’re starting to see it. Of course, we have many new applications coming out, some more viral than others. I know in gaming there was this new game that came out recently called AI Dungeon, and AI Dungeon 2.
In the tech space, a lot of data scientists said, “Ooh, let me try this out”, and they started using this AI Dungeon game where you get to explore worlds, kind of choose your own adventure, both on mobile and desktop.
They crashed the servers very quickly because so many people went on it. But it was amazing to see how you can generate pure worlds of text that aren’t gibberish. They actually made sense. They weren’t perfect, but they’re getting there, and that’s pretty cool.
Gideon Mendels
I completely agree. I’ve spent a lot of time in the NLP space, and things have just been taking off in the past three or four years. It’s very, very exciting. Up until three or four years ago, it was the case that traditional methods, the ones we’ve had since the 80s, were pretty much on par with the fanciest deep learning models, depending on the task. One example is document classification: taking a document and deciding what class it belongs to.
For things like sentiment analysis, as an example of that, you could get pretty good results with a simple linear model, logistic regression with n-grams. But now, with these transfer learning techniques, they’ve actually managed to beat that baseline pretty significantly.
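For context, that classic baseline is only a few lines in scikit-learn; the toy sentences and labels below are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; a real task would use thousands of labeled texts.
texts = [
    "great movie, loved it",
    "terrible plot, boring",
    "really enjoyable",
    "awful acting",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# Unigrams + bigrams feeding a logistic regression: the classic baseline.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(texts, labels)
print(clf.predict(["really loved it"]))  # expect [1] on this toy data
```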
David Yakobovitch
Now, speaking more about beating the baseline: when we look at machine learning models, sometimes the training takes a long time. As you mentioned, Microsoft spent $50,000 on compute just to build one language model. But what we often forget is that it’s not just how much money goes in, but how much compute time it takes.
The challenge there is that when you’re running these models, you may not have the advantage of time on your side. I got to play around with Comet recently, and I know you’ve talked about your new predictor tool for early stopping. Can you tell everyone more about what that is?
Gideon Mendels
Definitely. That’s something we’ve been working on for the past year and a half, almost two years, and we have a dedicated research team working on it. This is what I was saying: there are some things you can do in machine learning that are very hard to do in software engineering.
Essentially, what the predictor does, like you said, is act as an early-stopping mechanism. But the way it works is that it doesn’t treat your model as an individual black box. We actually learn from previous experiments, both yours and those of other people on your team or other public users, to try to predict where your model is going.
Like you said, some of these training jobs can take anything between two hours and two months, depending on how big your model is. Usually, what we do as data scientists is essentially babysit these models: you go and refresh the page, or look at Comet, and look at the loss curve.
Once it looks like things are not going anywhere, or the model has converged, the line essentially flattens, you stop the model. You iterate on what you did and try to figure out the next step. That’s essentially the research process. But what we found is that we can actually automate this process.
By looking at over 2 million models trained by our public users, and looking at the models specifically from that user or that team, we can actually stop these models early. On average, so not on a specific model, but on average across all of our tests, we save about 30% in training time.
Exactly like you said, one side of it is that you get to move 30% faster, and this is not some efficiency measurement or anything like that; we literally stop your model 30% sooner. So that’s one side of it, and the other is the cost.
Depending on how you as an organization look at these things, whether you’re trying to move much faster or trying to save costs, you have the ability to play with it. We’re very, very excited about that. As far as we know, this is the first meta machine learning product in the world, excluding AutoML, which has some similarities.
But there are some differences, and we’ve had great success with some of our customers who have been using the platform and integrated the predictor. We’re really looking forward to seeing how new customers and users will use it in the future.
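Comet has not published the internals of the predictor, so the sketch below is only the single-run intuition behind loss-curve early stopping: stop when the curve flattens. The patience and threshold values are arbitrary choices for illustration; the real product learns from millions of past experiments rather than applying a fixed rule like this.

```python
def should_stop(losses, patience=5, min_delta=1e-2):
    """Simplified early stopping: stop when the loss hasn't improved
    by at least `min_delta` over the last `patience` epochs."""
    if len(losses) <= patience:
        return False
    best_before = min(losses[:-patience])   # best loss up to the window
    recent_best = min(losses[-patience:])   # best loss inside the window
    return recent_best > best_before - min_delta  # no meaningful improvement

# Made-up loss curve that flattens out.
history = [1.0, 0.6, 0.45, 0.40, 0.39, 0.388, 0.387, 0.387, 0.386, 0.386, 0.386]
for epoch in range(1, len(history) + 1):
    if should_stop(history[:epoch]):
        print(f"stopping at epoch {epoch}")  # skips the remaining compute
        break
```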
David Yakobovitch
That’s excellent. And 30% of the time, if we think about it, is so quantifiable. It’s not some abstract efficiency number; just look at that model Microsoft built. Assuming the same savings applied there, 30% of $50,000 is $15,000.
Gideon Mendels
There’s definitely a lot of excitement from our customers using this, both from the research team and the procurement team or whoever is paying the cloud bill. Definitely.
There’s actually a lot more work that we’re doing in this space. That’s what I was saying before: this is something that’s very hard to automate in software engineering. Software, being logic, is very hard to automate; people have tried in the past to build systems that write your code for you and things like that.
There’s an issue with the problem statement even. But in machine learning, once you have this database of all these models and experiments, these artifacts, you can start looking into them and extracting insights from them. So transfer learning is extracting insights from a single trained model.
The predictor is extracting insights across all your models. There are so many things you could do in this space that I’m just really excited we get to take part in it.
David Yakobovitch
And there are all these new features that you’re consistently coming out with as you roll out updates to the product. I know you recently announced that Comet is continuing to grow, both with funding and product, and I’d love to hear more about those growth targets, or growth plans in general.
Gideon Mendels
We’ve been scaling up very, very quickly. We’ve been pretty fortunate to see a lot of success, both on the cloud side and the enterprise side. We have over 10,000 data scientists using the platform today, and it’s growing very quickly. We’re looking to essentially double the team by the end of the year.
There’s definitely a lot of work on the hiring side, across all departments: engineering, data science, research, marketing, sales, and so on. That’s on one side. On the product side, we’re really excited to dive deeper. Our approach from the start has been, instead of trying to solve all the problems in this space and build one end-to-end solution that does everything,
we’re going to do one thing, but we’re going to do it better than everyone else. Again, if you think about the software engineering world, if you had one platform that replaced AWS, New Relic, GitHub, Jenkins, all the tools in the world, one product with one login, that’s something that’s very hard to do.
There’s just a lot of depth in each one of those products. So our thesis has been, and it’s really playing out so far, that this is a best-of-breed market. At the end of the day, you want the best tool for the job, just like on the engineering side, and we’re going to continue to dive deeper there.
David Yakobovitch
Looking forward to that, and you’re absolutely right. We look at GitHub, Jenkins, New Relic, a lot of these automation and pipeline platforms: they facilitate better efficiency, they facilitate better deployment, but they don’t necessarily replace AWS. It’s not about no-code, it’s not about pure language agnosticism; it’s about facilitating access, facilitating a better workflow. So that’s excellent.
You’ve shared a lot today about software engineering, data science, and the differences between these industries. But as technology becomes widespread everywhere, everyone is a developer and every company is a technology company. As a result, the lines continue to get blurred.
Now, when we look at jobs, it’s ML researcher, data scientist, data analyst, and they seem so similar. Where do you think that’s going with software engineers and data scientists? What are the differences between the two?
Gideon Mendels
That’s one of the most interesting things you see in this industry: the challenge of defining the titles actually says a lot, in that we haven’t decided on the titles yet.
What I mean by that is, what we’re seeing is that machine learning is essentially becoming another tool in the engineer’s toolbox, and I’m using engineer as a broad term here. What that means is, when you’re trying to solve a problem as an engineer, sometimes the solution is to write an if statement.
Sometimes you want to do something a little more complicated, so you write a regular expression. And sometimes you want to try a model. A model is definitely not always the right way to go, but sometimes it is. So, moving forward, we actually think a lot of these things are essentially going to converge.
You would have this title of engineer, and they would both write software and train models, depending of course on the underlying problem.
As we move forward, I don’t know what we’re going to call it, but things will definitely converge. If you look at undergraduate programs, for example, machine learning and AI are becoming part of the core curriculum, so everyone’s picking up both capabilities: they take advanced programming and they take AI. It’s essentially just another tool in the toolbox, and these people are going to do all of it.
David Yakobovitch
That is one of the key shifts we’re seeing. The question has always been, where does education change, and how is that a predictor of industry changing? And you’re right.
Undergraduate programs, graduate programs, boot camps: everything’s getting more machine learning, software engineering, AI; all the focus is there. There’s some blending, but there will be some differences in what they’re called. Ten years ago, data scientist wasn’t necessarily a term that was used.
So in five or ten years, we might have new job titles. Recently, Gartner said that the job in highest demand for 2020 is going to be AI Specialist.
Gideon Mendels
Exactly. I do think that even in the future, after we converge, there are still going to be people doing state-of-the-art research, and only that.
I definitely think that group is not going anywhere, and those will continue to be the people pushing the industry forward, traditionally more academic, whether they work at a big tech company with a dedicated research group or are part of a university. But the majority of machine learning, or AI, will actually be done by software engineers.
David Yakobovitch
Now, one of the challenges, moving back to deepfakes and the dark side of humanity: we have this emergence of explainable AI, and everyone asks, how do we know that we can trust these systems?
What if it just lies? What if the AI is just going rogue? I know there have been some frameworks built out there, like LIME and SHAP and so forth. Does Comet support these, and where do you think explainable AI is going?
Gideon Mendels
Definitely. Explainability is an open research problem, and a very challenging one. There have been a few approaches to it. Starting from the basics, some of the simpler models are essentially inherently explainable.
Things like logistic regression, where you can look at feature importance. The challenge is more with deep learning models, which tend to operate as a black box, and tools like LIME and SHAP, for the first time, allow us to look into them and understand what’s going on.
In Comet, we essentially take a dual approach to this. First, we do support SHAP and LIME, so you can use their mechanisms, which are very research-oriented, to understand why your model made a given prediction. The other side of things is a lower level of explainability, but a very useful one as you push research forward: looking at
where your model is getting things wrong, maybe not necessarily why, but where. If you have good visibility into that, you can fix the what, whereas the why is sometimes very tricky. What I mean by that, just thinking of an example here, is if you’re trying to classify some examples and it looks like you’re always having a hard time with examples that contain a certain word, you can go back to the data labeling process and say, okay, let’s get more data from this class,
or get more data that has this word in it, and essentially drive the research process forward. Then you get a lot more safety, because at production time you won’t be surprised anymore; you already looked into all these edge cases and solved them in training. So those are the two main approaches people in the industry are taking.
There’s a lot more work that people, mostly researchers, are doing on this, so I’m very excited to see where this industry and problem will go. We work with major banks, one of the biggest investment banks in the world, and they’re definitely committed to solving this, also from a regulatory perspective, and they’re spending a lot of effort on it as well. So in the next two years, I expect we’ll see some new solutions out there.
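For readers who haven’t used these tools, here is a minimal SHAP sketch on a stand-in model; the dataset and model choice are placeholders, and plotting details can vary across shap versions.

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# A stand-in model and dataset; tree and linear models work similarly.
data = load_breast_cancer()
model = xgboost.XGBClassifier().fit(data.data, data.target)

# SHAP attributes each prediction to individual input features.
explainer = shap.Explainer(model)
shap_values = explainer(data.data)

# Visualize which features pushed the first prediction up or down.
shap.plots.waterfall(shap_values[0])
```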
David Yakobovitch
I definitely hope so. I was recently at a research conference focused on machine learning and these methods in text, and one of the researchers gave a presentation that showed how throwing in a certain phrase could completely throw off a model.
And that phrase was a very extreme phrase. This phrase is not rated G, for the Frozen fans listening to the podcast; we’re going to go to probably PG-13 right now. But the phrase they added was “all Americans are terrorists”.
They added that phrase to the end of ordinary sentences, like “I like hamburgers, and all Americans are terrorists”, or “I like to go to the dog park, and all Americans are terrorists”, or “I like to fly planes, and all Americans are terrorists”, and the model couldn’t really distinguish which ones were safe phrases and which weren’t.
So I’m definitely hopeful that there will be improvements in this space. Do you have any suggestions on where you think that’s going to be solved? Is transfer learning part of that, or will maybe other new packages beyond LIME and SHAP be coming out?
Gideon Mendels
It’s got to be a combination. SHAP and LIME are essentially research-oriented methods to try to solve that, and they’re pretty similar in how they do it. There are also other solutions out there. But the example you gave is essentially one that’s very much on the feature importance side of things.
There’s this certain word, like “terrorist”, that essentially shifts the entire prediction because it’s such a strong feature. So the combination would be: start with simple models, the ones that are inherently interpretable, because in most cases they’re going to behave pretty similarly to the deep learning model; essentially, they act like a proxy model.
The second thing is, make sure you look at the predictions coming out of your model. It’s not enough to look at the aggregate result; it’s very interesting to start diving in and see, “Oh, okay, this is where my model is struggling”.
And the third thing is, use things like SHAP and LIME, and hopefully new techniques that will come out. So there’s a lot more work that we need to do in this space.
David Yakobovitch
Absolutely. Taking this full circle, what we’re seeing is that the data science and software industries are continuing to converge. New titles will be emerging, new platforms are emerging, new packages and research are emerging, but there’s a big end goal that we’re all aiming towards in that convergence, which is making AI trustworthy.
Whether that’s through overfitting or deployment or modeling, you’ve shared today a lot of what Comet is doing to improve that for data scientists, research teams, and machine learning engineers.
My final question on today’s HumAIn episode is: what are some of your other predictions for 2020 in the software engineering, machine learning, or data science industries?
Gideon Mendels
That’s a great question. A lot of teams that are currently more in the research phase will finally hit production. That’s very exciting because, putting aside the operational side of it, once you deploy your model and see how it behaves in production, you learn a lot about the research phase.
Closing that loop will be very exciting. We’re also going to continue to see shifts in the libraries and underlying tools that are used. You can see that up until, I guess, two years ago, TensorFlow was the de facto deep learning library, but PyTorch has definitely been picking up, and you see there are a lot of other libraries out there, whether built on top of one of those or built from scratch.
It’s very important not to get married to a single library, because at the end of the day you want to be able to use the best research that’s out there.
The last thing I would say is this overlap and collaboration between academia and industry. That’s very exciting because so far it’s been academia publishing papers and industry trying to apply them. And in many cases, the data sets used in academia are very different from what you see in industry.
That convergence, being able to support both directions, is very, very exciting. And the last one I would say is more companies being able to get real business value from machine learning and AI.
Because when we see our customers, when they’re successful, they’re extremely successful. The impact on companies is just magnificent. It’s difficult, but when you get there, it’s definitely worth it.
David Yakobovitch
I definitely think it’s worth it. I’m excited to see where we continue to go this year, and I hope to see you at Strata O’Reilly 2020, wherever that will be, whether in New York or elsewhere. Gideon, thanks so much for being with us today on the HumAIn Podcast.
Gideon Mendels
Thank you so much for having me. We’ll actually be exhibiting at Strata, so hopefully we’ll see each other there again.
Communicating all this information and opening it up, you’re doing something really great here, and I really appreciate the opportunity to be part of it.
David Yakobovitch
Thank you for listening to this episode of the HumAIn Podcast. What do you think? Did the show measure up to your thoughts on artificial intelligence, data science, future of work and developer education? Listeners, I want to hear from you so that I can offer you the most relevant trend setting and educational content on the market.
You can reach me directly by email at david@yakobovitch.com. Remember to share this episode with a friend, subscribe and leave a review on your preferred podcasting app and tune into more episodes of HumAIn.