
What Kind of Machine Learning Do You Really Need?

 

So, you’ve got a great idea for a healthcare app. You’ve identified a need, recruited a rockstar team, and maybe even built a prototype. But now you’re starting to build the infrastructure for the veritable fire hose of data that you’re sure to be collecting from your users, and a critical question halts your progress: “Can I use Machine Learning to make this better?”

The answer is undoubtedly: “Yes!”

But how?

The vast space of Machine Learning and its associated techniques is expanding every day, so knowing how to choose between them all can feel like a Sisyphean task. Do you need NLP or Computer Vision? Deep Learning or Random Forests? Should you use an off-the-shelf API or hand-roll your own models? The list goes on.

Luckily, knowing what type of data you’re collecting and what particular need you have for machine learning can make your decision (almost) a no-brainer. To get a better idea of the kinds of technologies that burgeoning healthcare apps are relying on, let’s take a look at the different areas of study within the Machine Learning space, as well as examples of their successful application in medical research. Starting with…

 

Natural Language Processing: Making Sense from Text

Natural Language Processing (per Wikipedia) lies at the intersection of “computer science, information engineering, and artificial intelligence”. As the name implies, it deals with the interpretation of language by computers, either via text or voice analysis.

In a nutshell, NLP is used primarily to identify key elements (entities, intent, relationships) in free-form text, and make that data usable by computer systems in various ways. Digital personal assistants like Siri, Cortana, and Alexa make heavy use of concepts in NLP to do what they do.

Mirroring the recent explosion of personal assistant AI in the consumer market, the healthcare industry is seeing a surge of its own in apps that use NLP to give our devices and physicians a better understanding of our health.

One popular example in recent years? Chatbots.

By using the previously mentioned entity recognition to extract structured data from a piece of text, and applying supervised learning classification algorithms to interpret the data’s meaning, chatbots can “understand” what the user wants to do while also collecting and storing the information they need to do it. In doing so, chatbots allow users to use the app by simply having a conversation, instead of filling out forms and selecting from dropdown menus. Examples of chatbots being used in healthcare abound, but it’s interesting to note that there has been a particularly striking uptick in use for the management of mental health, perhaps because of this more intimate interaction model.
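To make that concrete, here’s a minimal sketch of the intent-classification half of a chatbot, using scikit-learn. The utterances and intent labels are hypothetical stand-ins for your own labeled training data, and a production bot would layer entity extraction and dialogue management on top.

```python
# A toy intent classifier: TF-IDF features plus logistic regression.
# All utterances and intent labels below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to book an appointment",
    "Can I see a doctor tomorrow?",
    "I've been feeling anxious lately",
    "My stress levels are really high",
    "Refill my prescription please",
    "I need more of my medication",
]
intents = [
    "book_appointment", "book_appointment",
    "report_mood", "report_mood",
    "refill_prescription", "refill_prescription",
]

# Fit the supervised classifier on the labeled utterances.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, intents)

# At runtime, route the conversation based on the predicted intent.
print(classifier.predict(["could you get me an appointment?"]))
```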

XZEVN–a “well-being self-management” app–wants to give professionals a better understanding of their overall well-being and identify when and how to intervene. Topflight helped them do just that by developing a chatbot with them. The chatbot uses information about a user’s mental state to recommend content from around the web to help manage their stress and bolster their decision making. By taking into account the user’s own stated goals, the chatbot also allows them to improve their mental health according to their own needs.

But that’s just one example. Another is Tess from X2, who can have natural-sounding conversations with patients about their mental health, give out surveys to patients on behalf of their psychiatrist, and help patients meet clinical goals alongside healthcare professionals. Tess acts as a friendlier way to populate a patient’s EHR through simple conversation, decreasing patient levels of depression and anxiety along the way.

Once a labor-intensive job necessitating complicated code and advanced domain knowledge, building chatbots can now be easily accomplished with one of the half-dozen frameworks available from big players like Microsoft, Facebook, Google, and Amazon. But if free and extensible is your jam, Rasa is an open source chatbot framework that holds its own against the big names, with more customization to boot.

Chatbots may be capable of having natural-sounding conversations, but they still require the user to be guided down pre-written conversational paths. So what if the structured nature of chatbot conversations isn’t what you need? What if you need an open-ended AI that can field any question and answer it like an expert?

Well, first of all, you’ll need a fair amount of already-labeled data for your domain. But what you need is called a Question Answering System. Yes, that’s really what they’re called.

Unimaginative name aside, they can be fabulously complex, involving deep bi-directional autoencoders requiring 4 days of training time on the latest hardware built just for neural networks, or they can be relatively simple. One fairly inspirational example of a simple but effective QA system is Jill Watson, a teaching assistant who answers course questions with a 97% accuracy rating by drawing on years’ worth of previous questions and answers in an online forum.

The real testament to how far NLP has come is that Jill managed to fool her class of students studying AI into thinking she was a human TA, at least for a while.
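If four days of GPU time isn’t in your budget, a pre-trained model can get you surprisingly far. Here’s a minimal sketch of an extractive QA system using the Hugging Face transformers library; the context passage is a made-up FAQ snippet, and in a real app you’d draw the context from your own domain’s corpus.

```python
# A minimal extractive question-answering sketch. The pipeline call
# downloads a general-purpose pre-trained QA model; the context text
# below is a hypothetical snippet standing in for real domain data.
from transformers import pipeline

qa = pipeline("question-answering")

context = (
    "Patients should fast for at least eight hours before the blood "
    "panel. Water is permitted, but coffee and juice are not."
)

result = qa(question="How long should I fast before the test?",
            context=context)
print(result["answer"])  # the span of the context the model selects
```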

 

Convolutional Neural Nets: Images, Diagnoses, and Time-series. Oh My!

Okay, NLP is cool and all, but what if your data isn’t text-based? What if you have loads and loads of MRI, x-ray, or other diagnostic images to process and draw insights from?

Well, as I’m sure you’re aware, the field of image recognition (and segmentation, and even generation) has exploded in recent years. And rest assured the recent advances in this field aren’t all labeling traffic signs and slapping Steve Buscemi’s face on a video of Jennifer Lawrence.

No, in fact deep convolutional neural networks for image classification are being shown to have almost magical levels of diagnostic accuracy in medicine. From breast cancer, to retinopathy, to live labeling of polyps during colonoscopies, deep learning is proving that human-level understanding of images may not be the Strong-AI problem that it was once thought to be.

But it’s not all x-rays and mammograms in the medical deep-CNN space. Companies like Face2Gene are using facial recognition and images taken by smartphones to classify phenotypes and accurately diagnose developmental syndromes. Chances are, if you have image data to analyze, CNNs are the way to go to reach state-of-the-art performance.

Keep in mind, though, that like all deep learning methods, deep convolutional networks require loads of training data to reach the levels of diagnostic certainty seen in the above examples. Thankfully, using Google’s repository of pre-trained deep convolutional networks alongside your own data can drastically cut down on the need for training samples in your chosen domain. Hurray for transfer learning!
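Here’s what that transfer learning looks like in practice, as a minimal Keras sketch. The MobileNetV2 base and the single anomaly-vs-normal output are illustrative assumptions, not a prescription for your data.

```python
# Transfer learning: reuse an ImageNet-trained convolutional base and
# train only a small classification head on your own images.
import tensorflow as tf

# Load the pre-trained base without its ImageNet classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained filters

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. anomaly vs. normal
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # your labeled scans
```

Because only the small head is trained from scratch, you can often get useful results from hundreds of labeled images instead of millions.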

On top of image recognition–with the modern ubiquity of sensors in phones and smart watches tracking us every day–the analysis of time-series data is also becoming more and more relevant. Because convnets by their very design seek to take advantage of the structure of data, they are perfect for the task. Time-series forecasting and classification, though perhaps not as flashy in the media, are two more areas seeing state-of-the-art results from deep CNNs.

Topflight even has experience in doing just that. For one of our clients, we took EEG data containing the Delta, Theta, Alpha, Beta, and Gamma waves from a number of EEG electrodes over time–totaling 17,430 data points each for a number of patients–and used that data to predict what conditions a patient might have. Once trained, our model could predict what condition a new patient coming into the system might have, and help reduce the time to diagnosis for the patient.
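The exact architecture we used is tuned to that client’s data, but a minimal sketch of the general approach, a 1D convolutional network for time-series classification, might look like the following. The input length, channel count, and number of conditions are hypothetical placeholders.

```python
# A 1D convnet for time-series classification. The dimensions below
# are assumed placeholders, not the real project's numbers.
import tensorflow as tf

n_timesteps, n_channels, n_conditions = 1000, 5, 4

model = tf.keras.Sequential([
    # Convolutions slide along the time axis, learning local patterns
    # across all input channels at once.
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu",
                           input_shape=(n_timesteps, n_channels)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_conditions, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```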

 

Beyond Deep Learning

Auto-encoders, entity recognition, image recognition, pre-trained deep convolutional neural networks: it’s all too much! Despite what popular trends might have you believe, Machine Learning is more than just deep neural networks.

In fact, some of the simplest machine learning algorithms can be used to great effect to do things like categorize patients based on their behavior, predict healthcare costs, and even determine whether patients will skip their next appointment. Even Watson, the AI that managed to beat Jeopardy champion Ken Jennings, ultimately uses a logistic regression model (and some fancy NLP and hard-coded rules) to make its predictions.

Some of these less complex machine learning algorithms may be far better suited to the mobile app space. Simpler algorithms tend to have smaller profiles, quicker execution times, and better explainability. That last quality is especially important in healthcare, and it’s one that deep neural nets struggle with in particular. Whether these factors affect your decisions at the end of the day depends on how you decide to integrate your algorithm of choice into your app: by serving the model on the mobile device itself, or server-side in the cloud.

If you’d like to serve your model on the user’s mobile device itself, then a smaller model with faster execution times would be ideal. Decision trees, small neural networks (also called multilayer perceptrons), and linear or logistic regressions are the way to go here. Not only are these algorithms eminently simple to explain, but once trained they tend to have much quicker execution times and a smaller memory profile.
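To drive home just how simple these can be, here’s a minimal scikit-learn sketch that trains a shallow decision tree on the appointment no-show problem mentioned earlier. The features and labels are hypothetical, standing in for your own patient data.

```python
# A shallow decision tree for predicting appointment no-shows.
# Features and labels below are hypothetical illustration data.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [age, days_since_last_visit, prior_no_shows]
X = [[34, 10, 0], [52, 120, 3], [29, 45, 1], [61, 5, 0],
     [45, 200, 4], [38, 30, 0], [57, 90, 2], [26, 15, 1]]
y = [0, 1, 0, 0, 1, 0, 1, 0]  # 1 = skipped their next appointment

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Capping the depth keeps the model small, fast, and explainable.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(tree.predict([[47, 150, 2]]))  # prediction for a new patient
```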

Of course, you can always serve your trained models as a batch process or from an API call; tutorials on how to do that are everywhere. However, as anyone who has deployed machine learning at any kind of scale can tell you: training models is a piece of cake compared to the complexity of deploying trained models in the cloud. So if your use case allows you to implement your trained models on your users’ mobile devices, despite the potential pitfalls, that means less infrastructure to manage, fewer network hops for your users, and a faster time to prediction, leading to a better user experience.
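If you do go the API route, the serving layer itself can start small. Here’s a minimal Flask sketch; the pickled model file and the feature layout are assumptions for illustration, and a production deployment would add authentication, validation, and scaling on top.

```python
# A bare-bones model-serving API. Assumes a scikit-learn model has
# been pickled to "model.pkl" (a hypothetical filename).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)  # load the trained model once, at startup

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[47, 150, 2]]
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run()
```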

All hope is not lost for those who wish to deploy deep learning models on mobile devices, either. Many of the popular deep learning frameworks have turned their eyes to the mobile space in recent years. TensorFlow Lite, Core ML, and Caffe2Go are just some of the frameworks allowing you to get your deep learning model onto a mobile device easily. They each make use of techniques like model compression, reducing input size, and constraints on network size to make it happen.
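As one example of that workflow, here’s a minimal sketch of converting a trained Keras model to TensorFlow Lite. The tiny stand-in network is a placeholder for whatever model you’ve actually trained.

```python
# Convert a trained Keras model to a TensorFlow Lite flatbuffer that
# can ship inside a mobile app.
import tensorflow as tf

# Stand-in for your trained model (e.g. the CNN sketched earlier).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # bundle this file with your mobile app
```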

 

Conclusion

Whether you’re building a Chatbot to augment your users’ therapy sessions, analyzing EEG data from their peripheral devices, or screening their medical scans for anomalies, the healthcare industry has a deep need for machine learning solutions. From the clinic to the app store, there are more opportunities opening up for an enterprising app developer every day.

When it all comes down to it, though, the machine learning algorithm you choose for your killer healthcare app depends on the data you collect, what functionality you want your app to have, and what the limitations of your architecture are going to be. Topflight can help you navigate these complexities with ease. We’ll narrow down the space of possibilities and get you from idea to execution faster than a decision tree can spit out a prediction.

Okay, maybe not THAT fast. But just take a look at our previous work, and when you’re suitably convinced you can get started with a proposal right away on our website. We look forward to hearing from you!
