TensorFlow vs #COVID19

tl;dr Just take me to the demo already!

It's March 2020 and #COVID19 is all the rage nowadays, for all the wrong reasons.

Whether you're still working in an open office space or your company has done the responsible thing and already sent everyone to work remotely from the comfort (and relative safety) of their home desks, there are three things you can do to protect yourself from contracting the virus (other than avoiding public spaces):

  1. no handshakes,
  2. thoroughly wash your hands with soap and
  3. don't touch your face under any circumstances.

So, the motivation behind this blog post stems from a discussion with colleagues at work regarding that last point on the list; we all just seem to have the darndest time keeping our hands away from our faces! I don't know if it's the nature of a desk job or what, but using TensorFlow.js, some simple HTML and my laptop's webcam, I counted the number of times I touch my face during a typical 8-hour workday in the freaking hundreds...


Machine Learning to the rescue!

Or well, maybe just a simple TensorFlow model, some arbitrary weights after a whopping 5 minutes of training said model, and two sets of 250 images, one for each image class: my face and my hands.
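Under the hood, a classifier like this one turns raw scores for the two classes into probabilities that sum to 1, typically via softmax; those probabilities are what the demo later compares against a threshold. A minimal sketch of that conversion, outside of TensorFlow entirely:

```javascript
// Softmax: turn raw per-class scores into probabilities summing to 1.
function softmax(scores) {
  const max = Math.max(...scores); // subtract max for numerical stability
  const exps = scores.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Example raw scores for ["face", "hand"]:
const probs = softmax([2.0, 0.5]); // probs[0] ≈ 0.82, probs[1] ≈ 0.18
```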

The idea here is that you can use your laptop's webcam (which, as a tech worker, is more than likely just sitting there staring at your face not doing much anyhow) to prevent yourself from touching your face with your hands. Now, I say "prevent", but the best I could come up with for the scope of this demo is to play a really weird zombie-like sound when the model detects your hand approaching your face in the camera's frame feed. I am sure y'all can get more creative than that though /wink.
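Since the model runs on every webcam frame, a naive "play sound when probability is high" check would fire continuously while your hand lingers in frame. Here's a hypothetical helper (not part of the demo below) that makes the alerting decision in isolation, assuming predictions shaped like tmImage's output (`[{ className, probability }, ...]`) and adding a cooldown between alerts:

```javascript
// Hypothetical alert gate with a cooldown, so the zombie sound doesn't
// replay on every consecutive frame where the hand is detected.
function makeAlerter({ className = "hand", threshold = 0.9, cooldownMs = 3000 } = {}) {
  let lastAlert = -Infinity;
  return function shouldAlert(predictions, now = Date.now()) {
    const match = predictions.find((p) => p.className === className);
    if (!match || match.probability < threshold) return false; // not confident enough
    if (now - lastAlert < cooldownMs) return false;            // still cooling down
    lastAlert = now;
    return true;
  };
}

const shouldAlert = makeAlerter({ className: "hand", threshold: 0.9, cooldownMs: 1000 });
```

In the demo's `predict()` loop you'd call `shouldAlert(prediction)` and only trigger the audio element when it returns `true`.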

Without further ado, the code for running this sample in the comfort of your own web server:

<button id="start-btn" type="button" onclick="init()" style="background-color: green">Start</button>
<button id="stop-btn" type="button" onclick="toggleDemo()" style="background-color: red;display: none">Stop</button>
<br />
<div id="covid-demo" style="display: none; text-align: center">
    <div id="webcam-container"></div>
    <div id="label-container"></div>
    <audio id="zombie-audio" controls>
        <source src="http://codeskulptor-demos.commondatastorage.googleapis.com/descent/Zombie.mp3" />
    </audio>
</div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@0.8/dist/teachablemachine-image.min.js"></script>
Some HTML with control elements.
// the link to your model provided by Teachable Machine export panel
const URL = "https://teachablemachine.withgoogle.com/models/R1g777eG/";

let model, webcam, labelContainer, maxPredictions;

// Load the image model and set up the webcam
async function init() {
    const modelURL = URL + "model.json";
    const metadataURL = URL + "metadata.json";
    model = await tmImage.load(modelURL, metadataURL);
    maxPredictions = model.getTotalClasses();

    // Convenience function to set up a webcam
    const flip = true; // whether to flip the webcam
    webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip
    await webcam.setup(); // request access to the webcam
    await webcam.play();
    window.requestAnimationFrame(loop);

    // append elements to the DOM
    toggleDemo(); // show the demo container and swap the Start/Stop buttons
    document.getElementById("webcam-container").appendChild(webcam.canvas);
    labelContainer = document.getElementById("label-container");
    for (let i = 0; i < maxPredictions; i++) { // and class labels
        labelContainer.appendChild(document.createElement("div"));
    }
}

async function loop() {
    webcam.update(); // update the webcam frame
    await predict();
    window.requestAnimationFrame(loop);
}

// run the webcam image through the image model
async function predict() {
    // predict can take in an image, video or canvas html element
    const prediction = await model.predict(webcam.canvas);
    for (let i = 0; i < maxPredictions; i++) {
        const classPrediction =
            prediction[i].className + ": " + prediction[i].probability.toFixed(2);
        labelContainer.childNodes[i].innerHTML = classPrediction;
    }

    // play sound to alert the user when their hand is close to their face
    if (prediction[1].probability > 0.9) {
        let zombieSound = document.getElementById("zombie-audio");
        zombieSound.play();
    }
}

function toggleDemo() {
    let demo = document.getElementById("covid-demo");
    let startBtn = document.getElementById("start-btn");
    let stopBtn = document.getElementById("stop-btn");
    if (demo.style.display === "none") {
        demo.style.display = "block";
        startBtn.style.display = "none";
        stopBtn.style.display = "block";
    } else {
        demo.style.display = "none";
        startBtn.style.display = "block";
        stopBtn.style.display = "none";
    }
}
The JavaScript code using TensorFlow.js

I used Google's Teachable Machine to train and host the TensorFlow model; it's quick and convenient for use cases involving binary classification of image-based data, but you can always train and host your own model should you so decide. One additional benefit of using Teachable Machine for these kinds of trivial experiments is that it allows exporting the model to various other machine learning frameworks, if that's your thing.


It's entirely likely that my model's training data is nowhere near granular enough to cover the characteristics of everyone's face (spoiler: that's a certainty), so if you're looking to use the demo and you're not satisfied with the results, please create your own model instead and just replace the URL constant in the code sample when running it on your local dev environment.
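One caveat when swapping in your own model: the sample above reads the hand probability by hard-coded index (`prediction[1]`), which silently breaks if your classes come back in a different order. A hypothetical, order-independent lookup by class name is safer; the `"hand"` label here is an assumption and should match whatever you named the class in Teachable Machine:

```javascript
// Look up a class probability by name instead of relying on array order.
// Returns 0 if the class isn't present in the predictions at all.
function probabilityOf(predictions, className) {
  const match = predictions.find((p) => p.className === className);
  return match ? match.probability : 0;
}

// Usage in predict(): if (probabilityOf(prediction, "hand") > 0.9) { ... }
```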

Learning resources

If you're just getting started with TensorFlow, I can't recommend this masterclass by freecodecamp.org enough; it'll get you from zero to hero in under 7 hours.

Stay safe everyone!