AI, Trust, and the New Era of Product Development

Featuring Maarten van der Heide

As a product development executive, a leader in hardware innovation, a consultant to startups and scale-ups, and currently Vice President of Product Development at Polaroid, I spend my days thinking about how artificial intelligence is reshaping the way we design, build, and think about hardware.

The AI gold rush isn’t hypothetical. It’s here. And like many leaders in hardware and connected product design, I’m experiencing a tension: excitement over a transformative new tool and concern about its rapid integration into workflows, often without structure, oversight, or even visibility.

Across our global teams, engineers are quietly experimenting with AI in their daily tasks. They're not hiding it; they're just doing what good engineers do: testing the tools at their disposal. But silent adoption creates challenges. When teams work independently, the organization loses the ability to learn together, missing chances to develop coherent strategies for integrating and governing AI tools effectively.

Because in the end, successful integration hinges on trust.

This isn't the first wave of technological change in hardware design. We've previously experienced major shifts with the introduction of sophisticated computer-aided engineering tools, especially advanced multiphysics simulations such as thermal analysis, fluid dynamics, and structural simulations. These simulation tools weren't immediately trusted or widely adopted. Engineers approached them cautiously, validating outputs carefully and learning the boundaries of their reliability as the tools' accuracy improved.

AI differs not just in scale but in speed and accessibility. Previous tools were introduced methodically and top-down, carefully tested by R&D before wider implementation. AI, however, emerged bottom-up, rapidly adopted by anyone with internet access and a laptop. This democratization is powerful, but it comes with risks.

"But silent adoption creates challenges. When teams work independently, the organization loses the ability to learn together, missing chances to develop coherent strategies for integrating and governing AI tools effectively."

Moving Faster, But With Intention

When practitioners across disciplines (mechanical, electrical, firmware, product management, and more) integrate AI independently, creativity and efficiency can flourish. Tasks accelerate, concepts are refined rapidly, and more ideas can be tested and improved.

However, acceleration without clear direction increases risks. Teams equipped with powerful tools but lacking shared guidelines might move faster, but potentially in the wrong direction. While AI can shorten development cycles dramatically, without careful application, it also increases the likelihood of costly missteps.

Faster development alone isn't the goal; simply launching more products quickly isn't enough. What truly matters is improving the quality and impact of the products we create. AI's greatest value isn't just speed; it's the ability to rapidly iterate, test, and refine ideas, leading us not just to more products but to significantly better ones.

The most important factor in making AI work isn't speed; it's intentionality.

Critical Thinking in the Era of AI

AI doesn't replace critical thinking; it demands more of it. Historically, we've built trust in simulation tools through structured validation, learning their strengths, and clearly understanding their limitations. We taught engineers to critically evaluate simulation results, knowing these tools were helpful guides, not absolute truths.

Today, however, many treat AI outputs as definitive answers without enough skepticism. AI provides highly confident answers, but confidence doesn't equate to correctness. With traditional research and learning, sources are transparent and easily verified. With AI-generated content, the reasoning and sourcing behind answers remain opaque, making verification significantly harder.

To make AI a trusted partner, we must develop frameworks and practices that reinforce critical thinking, encourage scrutiny, and support thoughtful decision-making. Trust in AI isn't earned through casual use; it must be systematically verified, continuously challenged, and improved over time.

"My approach to AI is careful but optimistic. I use AI actively but always deliberately, like operating a powerful new machine whose limits aren't fully understood."


Navigating the Path Forward

AI is still maturing, quickly and unpredictably. Any attempt to standardize AI governance immediately would soon be outdated. Instead, a layered approach makes sense: beginning with internal policies, sharing best practices widely, and eventually converging on industry-wide frameworks as the technology stabilizes.

My approach to AI is careful but optimistic. I use AI actively but always deliberately, like operating a powerful new machine whose limits aren't fully understood. I encourage my teams to deeply engage with AI, question results thoughtfully, and develop strong critical thinking habits around the technology.

The future of AI depends heavily on the people who use it, not casually or passively, but intentionally and thoughtfully. As leaders, we must champion transparency, encourage disciplined experimentation, and equip our teams with the tools and mindset to think critically alongside AI.

We stand at a crossroads. AI can help us become more innovative or more careless. It can amplify excellence or accelerate mistakes. It’s up to us to guide AI thoughtfully and intentionally, ensuring we build a future that reflects the best of our human capabilities, enhanced, not replaced, by technology.

Maarten van der Heide

VP Product Development at Polaroid

© 2025 Enzzo, Inc. All Rights Reserved.