Suggestions
Matthew Mazur
Founder and Data Consultant
Matthew Mazur is a software entrepreneur and analytics consultant, best known as the founder of Preceden, a SaaS timeline maker and roadmapping tool. He started developing Preceden in late 2009 and has been focusing on it full-time since September 2018. The tool is designed to help users visualize complex timelines easily.
In addition to his work with Preceden, Mazur is the founder of Alden Analytics, a consulting business that assists SaaS companies in leveraging data for growth. His experience includes serving as the data lead at Help Scout, where he worked on various analytical projects to improve business performance using data-driven insights.
Before entering the tech industry, Mazur served in the United States Air Force for over nine years. He attended the United States Air Force Academy and held various leadership roles, including project manager for Air Operations Centers and director of communications for Air Force Special Operations in Iraq.
Mazur resides in Cary, North Carolina, with his family and continues to be involved in software development and analytics consulting.
Highlights
“An agent's harness matters almost as much as its model. If you have a bad harness, then you may as well have a bad model.”
It’s interesting to think about the possible upper limits of capability gains you can get from harness engineering.
As an extreme example, could you plug one of our frontier models into a cleverly-built harness and wind up with something indistinguishable from AGI?
Probably not… but maybe? Poetiq has shown via their ARC-AGI-2 performance that you can build a scaffold/harness that improves general reasoning, at least as measured by that benchmark.
What are the odds their approach can’t be significantly improved upon?
Poetiq achieved state-of-the-art results on ARC-AGI-2, not by training a powerful new model, but by building a better harness for existing models to use.
From their site:
“The flexibility of our meta-system allowed us to achieve this within hours of Gemini 3’s release. At Poetiq, we do not need to build, or even fine-tune, our own large frontier models. Our meta-system is designed to automatically create full systems that solve specific tasks by utilizing any existing frontier model.”
And:
“Our system engages in an iterative problem-solving loop. It doesn't just ask a single question; it uses the LLM to generate a potential solution (sometimes code as in this example), receives feedback, analyzes the feedback, and then uses the LLM again to refine it. This multi-step, self-improving process allows us to incrementally build and perfect the answer.”
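To make that loop concrete, here is a minimal Python sketch of the generate-feedback-refine pattern the quote describes. The call_llm and evaluate functions are hypothetical placeholders, not Poetiq's actual API; the point is just the shape of the loop.

```python
# Minimal sketch of a generate -> feedback -> refine loop.
# call_llm and evaluate are hypothetical stand-ins, not Poetiq's code.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to any frontier model's completion endpoint."""
    raise NotImplementedError

def evaluate(candidate: str) -> tuple[bool, str]:
    """Stand-in for task-specific feedback, e.g. running generated code against examples."""
    raise NotImplementedError

def solve(task: str, max_iters: int = 5) -> str:
    # First pass: ask the model for an initial solution
    candidate = call_llm(f"Propose a solution to this task:\n{task}")
    for _ in range(max_iters):
        ok, feedback = evaluate(candidate)
        if ok:
            break  # the feedback says the solution is good enough
        # Feed the feedback back to the model and ask for a refinement
        candidate = call_llm(
            f"Task:\n{task}\n\nPrevious attempt:\n{candidate}\n\n"
            f"Feedback:\n{feedback}\n\nRevise the solution to address the feedback."
        )
    return candidate
```

The interesting design question is what evaluate looks like for a given task; for ARC-style puzzles it could presumably be as simple as running the generated program against the training examples and reporting mismatches.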
Poetiq's actual code is on GitHub too.
It’s a reminder that capability emerges not just from the model, but from the system that guides it.