Senior Data / Software Engineer

Matt Caddell

DevOps & Data Specialist, Avid Tinkerer, Technical Lead, Aspiring Home Chef & Coffee Snob

Senior Engineer with seven years of experience working on data processing pipelines, AI, and the systems that sustain them.

Not a qualified front-end engineer; this site is based on a template from templatemo.

WHAT I DO

Data Processing

Ensuring that data is delivered from multiple sources in a timely fashion, with the relevant transformations applied so it's ready to be used

Data Warehousing

Democratizing data access through appropriate warehouse design choices and data management architectures

"Data Ops"

The best data in the world is only useful if it's reliably delivered; data-related technologies come with their own DevOps-style challenges and considerations

ABOUT ME

Senior Data / Software Engineer

Both the business and tech sides of every company have a growing need to access and draw conclusions from all kinds of data; I help make that happen.

I have extensive experience with more traditional DevOps/SRE-style work: optimizing build and release pipelines and maintaining infrastructure across multiple cloud providers.

I attended the University of Illinois at Urbana-Champaign for undergrad, earning a Bachelor of Science in Computer Science with a specialty in "Big Data and Artificial Intelligence".

Outside of work, I fill my free time sitting in coffee shops sipping coffee, in a garage wondering why I can't get an old motorcycle to start, or spending time with friends enjoying a nice afternoon.

I can also frequently be found in the kitchen trying something new, curled up on the couch with a book, or tending to my plants and garden space if I'm not wandering around New York City.

Years of Experience: 7
Jobs Held: 4
Number of Plants that Didn't Survive 2025: 3
Average Cups of Coffee / Day: 3
Number of Motorcycles in Garage: 0

FAVORITE TECH

Snowflake

It may be pricey, but the cloud-agnostic aspect makes it easy to use Snowflake with any tech stack, and it scales with minimal effort.
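
As a rough sketch of that low-effort scaling: resizing a virtual warehouse is a single statement. A minimal example using the snowflake-connector-python package, where the account, user, and warehouse names are placeholders:

    import snowflake.connector

    # Placeholder credentials; use a real secrets manager in practice.
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="...",
    )
    # Scaling up is one statement; Snowflake handles the rest.
    conn.cursor().execute("ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE'")
    conn.close()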

Apache Spark

After years of writing ETL pipelines in standard MapReduce and Apache Crunch, I've found Spark to be a revelation.
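
To give a sense of why: a per-key aggregation that would take a full Mapper/Reducer pair in classic MapReduce is a few lines of PySpark. A minimal sketch, assuming a hypothetical events.csv with user_id and amount columns:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example").getOrCreate()

    # events.csv is a stand-in for any tabular input source.
    df = spark.read.csv("events.csv", header=True, inferSchema=True)
    totals = df.groupBy("user_id").agg(F.sum("amount").alias("total"))
    totals.show()
    spark.stop()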

Apache Airflow

You can run it anywhere, on anything, and accomplish almost any data-related task with the out-of-the-box tools. The ability to extend Airflow with your own plugins makes it easy to build the features you need.
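
For a taste of those out-of-the-box tools, here is a minimal sketch of a daily DAG built entirely from a stock operator (assuming a recent Airflow 2.x; the DAG name and command are hypothetical):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="example_daily_export",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # A stock operator; custom plugins and operators slot in the same way.
        export = BashOperator(task_id="export", bash_command="echo exporting")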

Amazon Web Services

If there isn't an AWS service that can already do at least half of what you need, it may be worth questioning if it's a good idea to do it at this point...

Kubernetes

A well-managed set of Kubernetes clusters has made standing up and test-driving new systems and services a non-issue, and it allows for scaling and cost sharing across multiple teams and systems in a way I've never had before.

Spinnaker

It has made writing and running complex release pipelines easy and saved hours of manual effort for teams I've worked with.

Helm

You want to deploy a complex system entirely in Kubernetes in seconds? Done. With almost no hassle, and a single-digit number of commands. Being able to test new tools in minutes has made vetting them easier than ever.
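
To make "single-digit" concrete: two commands stand up a full PostgreSQL install from the Bitnami chart. They're normally run straight from a shell; the Python wrapper below is just to keep the sketch in one language, and the release name is an example:

    import subprocess

    # Register the chart repository, then install the chart as a release.
    subprocess.run(
        ["helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami"],
        check=True,
    )
    subprocess.run(["helm", "install", "my-postgres", "bitnami/postgresql"], check=True)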

GitHub Actions

It makes sense to have your code tested and built as close as possible to the tool you use to host and review it, and GitHub Actions makes that quick and simple to do. Also, anything is better than an unmaintained Jenkins instance.