Why I started this blog

I decided to make a blog so that I’d have a place to write about projects I work on, as well as any interesting things I learn along the way. Specifically, I plan to keep notes while I’m working through problems and document my mistakes and successes. I’ve historically been pretty bad at doing this, but maintaining a blog should force me to improve in this area. Hopefully, this will help me work through problems and maybe arrive at new solutions more quickly. I also plan to make standalone tutorials when I find solutions to challenges I encounter. Ideally, this blog will help provide an easier path for anyone who is interested in similar projects but doesn’t know where to start. I like the idea of making my work available for others to build on so they don’t need to start from square one.

Blog Topics

  • Exploring the potential for combining modern machine learning (ML) models with real-time 3D development platforms like Unity.

  • Leveraging modern computer vision models to map a user to a virtual environment.

  • Using animation applications such as Blender to create synthetic datasets for training ML computer vision models.

  • Creating ML models to assist artists with content creation in Blender.

  • Leveraging the capabilities of modern ML models for applications that would be impossible or far too expensive to have a human perform.

Current Projects

Right now, I’m working on getting ML models to run with Unity’s Barracuda inference engine. My current goal is to map a user’s body pose, facial pose, and hand pose to a virtual character in Unity using just a regular webcam. I’ve started by using a PoseNet model to map the user’s estimated body pose to a virtual character.
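
As a rough sketch of what the Unity side of this looks like, the snippet below loads an ONNX model with Barracuda and runs it on webcam frames. The class and field names (PoseNetRunner, posenetModel) are just placeholders, and the step that decodes the model output into joint positions is left as a comment, so treat it as an outline of the general pattern rather than the exact code from this project.

    using UnityEngine;
    using Unity.Barracuda;

    // Placeholder class: assumes a PoseNet-style ONNX model imported as a
    // Barracuda NNModel asset and assigned in the Inspector.
    public class PoseNetRunner : MonoBehaviour
    {
        public NNModel posenetModel;   // Barracuda model asset (imported ONNX file)
        private IWorker worker;
        private WebCamTexture webcam;

        void Start()
        {
            // Load the model and create an inference worker
            // (Type.Auto picks a GPU compute backend when available)
            var model = ModelLoader.Load(posenetModel);
            worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);

            webcam = new WebCamTexture();
            webcam.Play();
        }

        void Update()
        {
            // Wrap the current webcam frame in a 3-channel tensor and run inference
            using (var input = new Tensor(webcam, channels: 3))
            {
                worker.Execute(input);
                Tensor output = worker.PeekOutput();   // e.g. keypoint heatmaps for PoseNet-style models
                // ...decode the output into joint positions and drive the character rig here...
            }
        }

        void OnDestroy()
        {
            worker?.Dispose();
            webcam.Stop();
        }
    }

In practice, the webcam frame usually needs to be resized and normalized to whatever input resolution the model expects, but that’s the basic loop: load the model once, feed it a tensor per frame, and read back the output.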

Future Projects

  • Figuring out how to procedurally generate datasets in Blender for computer vision applications:
    • human pose estimation
    • gesture recognition
    • facial key point estimation
    • image classification
    • image segmentation