This is a transcript of the podcast episode Masatoshi Honda on navigating the ‘lab to market’ journey with AI governance, in which GRIP’s Senior Reporter Vlada Gurvich speaks with Masatoshi Honda, Venture Partner at Lifetime Ventures and Associate Professor at Kyoto University.
[INTRO]
Vlada Gurvich: Greetings, everyone, and welcome to a Global Relay Intelligence and Practice, or GRIP, podcast. I’m Vlada Gurvich, Senior Editor for GRIP, talking to you from New York City. GRIP is a service that features a daily website of articles on a variety of compliance and regulatory topics, plus podcasts and other deep dives into compliance trends and best practices.
You can find the service at grip.globalrelay.com, and we hope you’ll connect with GRIP on LinkedIn. I’m so pleased to announce that today’s podcast session features Masatoshi Honda, venture partner at Lifetime Ventures and an associate professor at Kyoto University. I’m going to ask Masatoshi to please introduce himself and describe his background before we kick off the program. Over to you, Masatoshi.
Masatoshi Honda: Thank you so much, Vlada, for having me on this podcast. It’s an honor. I teach innovation management as an associate professor of innovation in Japan, mentoring research-based companies and young talent. Within this role, I’m passionate about bridging experimental-stage AI and biotech healthcare projects to real-world implementation, which serves as a translational engine from lab to market.
My work with venture capital firms investing in the US, UK, and Japan supports this effort. I originally graduated with majors in materials and electrical engineering and started my career at McKinsey. Then I co-founded a biotech and agrotech venture in Asia, focusing on genome editing and next-generation sensing. Since moving to the US six years ago, I’ve been working across electrical engineering, healthcare, and AI, building bridges between New York and Tokyo. I also participate in global governance efforts, including the United Nations HESI working group on AI and the R&D implementation group.
Vlada Gurvich: Wonderful. Thank you. To set the stage, please tell us what lab to market means and how all of this might relate to compliance officers.
Masatoshi Honda: Sure. Lab to market represents the journey of transforming exponential innovation into real-world systems that can withstand operational scrutiny, market demands, and regulatory requirements. In fields like biotech, healthcare, and AI, as well as environmental engineering, we’re not just commercializing ideas – we’re converting high-impact, risky concepts into responsible infrastructures. At Kyoto University and other positions I hold, we work with interdisciplinary teams to identify market needs and viable pathways that de-risk technology for scalable adoption.
For compliance officers, specifically, the greatest opportunity lies in what we call “white space” – domains where markets are growing, but regulation hasn’t caught up. By involving compliance professionals as co-designers from the outset, rather than just enforcing regulations after the fact, we can create tremendous value. From an innovation management perspective, I teach that the early stages often involve working on ideas that are counterintuitive, sometimes using technology that doesn’t yet resonate with the mainstream but addresses urgent problems for a small set of users. Anticipating future needs requires not just market sense, but also regulatory foresight. In this sense, compliance officers aren’t just policy enforcers – they’re essential collaborators in shaping trusted pathways for growth and public adoption.
Vlada Gurvich: And in your experience, what are the key stages of the lab to market journey? Do these stages follow a similar pattern across sectors or does the pathway shift significantly depending on the field?
Masatoshi Honda: Well, the journey typically progresses through several distinct stages, but with important sector variations, I would say. First, there’s hypothesis and exploration, where we test initial concepts in controlled environments and refine them through iteration. Second, we have prototype development, focusing on proof of concept and validating core assumptions. Third, there’s company formation, where product planning and business model development occur. And finally, there are the growth stages, where we build systems that can operate reliably at scale.
So, in healthcare, this follows a structured clinical trial pathway. In fields like aerospace or energy, government engagement is standardized. But across all fields, innovation typically begins in academic labs, where imagination must be balanced with responsibility. What’s changing dramatically is the timeline in deep tech – areas like fusion energy, synthetic biology, or advanced AI, for example. Development cycles that once took decades are being compressed through public-private partnerships. This acceleration demands a new approach to compliance, not as a brake on innovation, but as a governance architecture that moves in parallel with technical development.
Vlada Gurvich: Thank you. It’s very interesting. And do you believe that the current university and startup structures truly support a smooth lab to market transition or could there be better ways to integrate research and funding so that science can move more effectively into society?
Masatoshi Honda: The current structure has evolved, but remains suboptimal. Universities still maintain silos between research activities, funding mechanisms, and technology transfer offices, which can create friction. Meanwhile, startups receive capital with sometimes minimal regulatory guidance, creating governance gaps that later emerge as potential crises. The convergence of software and hardware, especially in AI, has dramatically accelerated market entry across sectors, including hardware. For example, ChatGPT reached 100 million monthly users in just two months, something that would have taken traditional technologies years to achieve.
This unprecedented development speed requires new approaches. So I believe we’re seeing a structural shift from the serendipity-driven approach, where universities commercialize discoveries, towards more deliberate formation of technology solutions aligned with societal needs and market demands. And we often call this venture creation, which is separate from spin-outs. This shift requires compliance insight and future ethical standards to move upstream in the innovation process, which helps shape technologies before they crystallize into difficult-to-modify forms.
Vlada Gurvich: Thank you. And how does academic entrepreneurship differ between developing and developed economies? Do you think the core challenges are universal or are there structural or cultural dynamics that create fundamentally different lab to market realities?
Masatoshi Honda: Well, America pioneered the model. China, the UK, Germany, Japan, South Korea, and Singapore have each developed distinctive approaches. For example, Saudi Arabia’s Vision 2030 initiative demonstrates how concentrated investment has accelerated its innovation ecosystem over the past five to ten years. An interesting evolution is happening in emerging economies too, across Southeast Asia, Latin America, and Africa.
We are seeing unique digital platforms that address local needs rather than mimicking Western models. Since regulations, social needs, and priorities vary significantly by region, different regulatory sandboxes are emerging, which affect how technologies can scale internationally. What is needed is a universal governance framework that balances innovation with responsibility. What varies is how these frameworks are implemented based on cultural, economic, and regulatory context. The most successful models recognize that compliance isn’t just risk mitigation, but a strategic enabler of sustainable growth.
Vlada Gurvich: Corporate investment is often seen as both a driver and a threat in the lab to market process. What’s your take? Also, given recent funding cuts and economic pressures in the US, do you foresee more academic labs pushing to scale their technologies into businesses out of necessity?
Masatoshi Honda: Well, corporate investment can significantly accelerate the lab-to-market process when aligned with long-term objectives. The key challenge is that business metrics and academic interests operate on different timelines and value systems. We’re already seeing privately led investment in fundamental research areas like longevity science and quantum computing.
This can be tremendously positive, providing resources that public funding alone can’t match. However, return-on-investment pressure can conflict with research ethics and public interest. There’s also the risk that public-minded design principles – such as governance and equity – will be subordinated to corporate strategy. This creates both challenges and opportunities. The challenge is maintaining proper safeguards without impeding innovation. The opportunity is designing a governance framework that creates competitive advantages for companies that prioritize responsible development.
Vlada Gurvich: Very interesting. And there is a growing belief that startups are not mini GAFAs, that is, not smaller versions of tech giants like Google, Apple, Facebook, or Amazon, but rather socio-technological test beds for future governance models. Do you agree with that idea? And if so, how do you see this play out in practice, especially in research-driven or AI-focused startups?
Masatoshi Honda: Well, of course, I strongly agree with that framing. Startups aren’t just smaller versions of tech giants. They’re experimental spaces where governance models can be tested alongside technology innovations. This reminds me of AlphaGo, which demonstrated with its famous move 37, a completely counterintuitive move that no human expert would have played, but proved brilliant. Just as move 37 broke from traditional thinking in Go, startups can pioneer approaches that break from conventional frameworks.
However, we face a fundamental tension: the massive funding flowing into AI startups, for example in Silicon Valley, combined with the mindset of “if I don’t do it, someone else will.” This competitive mindset risks creating a tragedy of the commons, where everyone optimizes for growth while neglecting collective risks or potential misuses. This is where compliance officers can add value, not by imposing rigid rules, but by helping design governance frameworks that enable innovation while preventing a race to the bottom that ultimately harms everyone in the future.
Vlada Gurvich: Speaking of AI inclusion in tech tools, let’s take a closer look at innovation and AI. Innovation today isn’t just about creating something new. It’s about creating systems that come with structured choices and accountability. In the lab to market process, how can researchers and founders ensure their innovations aren’t just technically novel, but ethically and socially grounded?
Masatoshi Honda: This perspective is very central to responsible innovation. Researchers need funding, so ethical considerations must begin at the project design stage, from the funding application stage onwards. In advanced economies, market mechanisms generally discourage deliberately harmful approaches. However, technologies frequently exceed their intended design parameters or experience unanticipated issues, such as data leaks or AI-related health concerns. Especially in AI, ambiguity is more prevalent, which is where startups thrive and where moral hypotheses can be tested.
Overregulation may leave only incumbents standing, with a false aura, I would say, of moral authority. And open-source AI for everyone may create benefits, but can also lead to chaos in some cases, such as fraud and scams. While tightly regulated AI control may lead to techno-feudalism, hyper-inequality, or concentration, the challenge for compliance is finding the appropriate balance for each application domain, rather than adopting one-size-fits-all approaches. And we need proportionate frameworks that match governance intensity with potential risk, while still enabling responsible experimentation.
Vlada Gurvich: As optimization becomes the goal in many AI-driven startups, we often see institutional voids: unclear IP ownership, opaque algorithms, and ambiguous accountability. How should founders and regulators address these gaps early in the lab to market journey to avoid downstream risks?
Masatoshi Honda: These voids create both challenges and opportunities. Taking AI agents as an example, systems that can independently design and execute optimized workflows with minimal human intervention, we face fundamental questions about intellectual property, decision transparency, and accountability for outcomes. The most effective governance frameworks are co-designed with multiple stakeholders, including compliance teams, and incorporate real-world needs and industry insights. Successful frameworks don’t just restrict, they enhance functionality by providing reliability, building user trust, and reducing operational risks. This balancing act is important.
Too little governance leads to a “Wild West” scenario, where potentially harmful applications proliferate. Too much governance creates concentrated power in the hands of those who can afford compliance overhead. Effective compliance officers navigate this middle path, creating frameworks that enable responsible innovation while combating both extremes. For compliance context, this represents an opportunity to move beyond binary “approve or forbid” models towards more sophisticated approaches that create positive incentives for responsible innovation, while establishing appropriate guardrails around high-risk applications.
Vlada Gurvich: And I also wanted to ask you, tools like NVIDIA’s DCGM or Datadog track system metrics, but don’t yet offer transparency into energy use, emissions or algorithmic decision making. In your view, how urgent is the need for deeper auditability in AI systems going to market, especially in regulated industries like health or climate tech? And how does this play out in global markets, for instance, in Japan?
Masatoshi Honda: Auditability is rapidly becoming a baseline expectation in AI systems, particularly in regulated domains. Current monitoring tools track performance metrics, but provide limited visibility into decision processes or impact patterns. We need comprehensive approaches to AI infrastructure governance, providing real-time traceability across multiple dimensions, like energy consumption, decision flows, error conditions, and outcomes.
This multi-faceted accountability is essential for building trusted systems. The EU AI Act, which entered into force last year, requires regulated industries to implement these rules progressively through 2026. High-risk AI is shifting from being a black box solely under the responsibility of tech giants to an assurance model that responds to the needs of the companies using it.

This shift is driven by growing mandates for output accountability, log retention, and audit trails. Such measures are becoming essential. For instance, if output logs from LLMs are stored only at the chat application or API provider level, companies are unable to verify or explain the content correctness of generated outputs to users or auditors. Combined with other issues, this limitation significantly increases legal risk. Japan is now coordinating with international standards, with the AI Promotion Bill passing through the Diet in May this year. For compliance officers, this creates an opportunity to develop auditing frameworks that demonstrate both regulatory compliance and broader trustworthiness to stakeholders. Those who can design audit mechanisms that add business value internationally, beyond mere compliance, will be particularly valuable in this evolving landscape.
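To make the log-retention point concrete, here is a minimal, hypothetical sketch of company-side output logging of the kind Masatoshi describes: an append-only record of LLM outputs with content hashes so a company can later verify, for a user or an auditor, what was actually generated. The class name, fields, and workflow are illustrative assumptions, not a reference to any specific product.

```python
import hashlib
import time

class LLMAuditLog:
    """Append-only log of LLM outputs kept on the company side,
    so generated content can later be verified for auditors.
    (Hypothetical illustration, not a specific vendor's API.)"""

    def __init__(self):
        self.records = []

    def record(self, prompt: str, output: str, model: str) -> str:
        # Hash prompt + output so the stored entry can later be
        # checked against the content actually shown to the user.
        digest = hashlib.sha256((prompt + output).encode()).hexdigest()
        self.records.append({
            "timestamp": time.time(),
            "model": model,
            "prompt": prompt,
            "output": output,
            "sha256": digest,
        })
        return digest

    def verify(self, digest: str, prompt: str, output: str) -> bool:
        # An auditor recomputes the hash to confirm the logged
        # entry matches the output in question.
        expected = hashlib.sha256((prompt + output).encode()).hexdigest()
        return digest == expected and any(
            r["sha256"] == digest for r in self.records
        )

log = LLMAuditLog()
d = log.record("What is our refund policy?",
               "Refunds within 30 days.", "example-model")
print(log.verify(d, "What is our refund policy?",
                 "Refunds within 30 days."))  # True
```

The key design point is that the log lives with the company using the model, not only with the API provider, so verification does not depend on a third party.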
Vlada Gurvich: And we’ve seen contrasting regulatory moves in the US, with states like Colorado and Texas taking different approaches while Virginia vetoed a key AI bill. How should, in your opinion, early-stage founders navigate this regulatory patchwork?
Masatoshi Honda: This can be similar to how environmental standards differ by state. So, for example, companies providing catalog product intelligence analyze compliance of diverse products in a data-driven way, offering this service to manufacturers, wholesalers, and retailers. Some startups will focus on this as an opportunity, and ideally, these services would be controllable and accessible via APIs. Founders are incentivized to focus on scaling, but those developing AI for this market need what I would call a federated compliance mindset, rather than optimizing for any single regulatory regime.
A successful company builds systems with modular compliance layers that adapt to diverse requirements. So, this suggests the need to develop flexible frameworks that can accommodate differing regulatory approaches without requiring fundamental system redesigns. The most valuable compliance professionals will be those who can translate regulatory diversity into practical design principles that engineers and product teams can implement.
Vlada Gurvich: What is your advice to regulators who are still developing a regulatory framework around AI for regulated businesses? For instance, the SEC?
Masatoshi Honda: Well, my primary suggestion to regulators approaching AI governance is to design for integration rather than imposition. The most effective frameworks are those that companies actively want to implement because they create clear value beyond compliance. Creating protected spaces for discussing emerging challenges can foster more honest dialogue between innovators and regulators, even in lab-to-market and innovation management contexts.
Providing templates and reference models across various fields gives innovators clear targets to build toward. The most valuable regulatory approaches enhance rather than hinder functionality, including reliability, building user trust, and reducing operational risks, while enabling continued innovation. So, for compliance officers, this creates opportunities to serve as bridges between regulatory objectives and business realities, translating between these sometimes divergent perspectives.
Vlada Gurvich: And to conclude, I’d like to ask you about your most recent projects involving AI tools. Which broader shifts or transformations in society or industry do you see them as most relevant to? And what kind of societal change would you most like to see for your work to achieve its fullest impact? Feel free to share as much or as little detail as you’d like.
Masatoshi Honda: Thanks for asking. I’m actually exploring multiple projects across several domains, including healthcare and AI. Within my original academic background in materials and electrical engineering, I’m particularly interested in neural interfaces and next-generation medical devices. Many of my projects are still in the lab phase and remain undisclosed, but I have filed five to ten pending patents, some together with my clients and collaborators.
These span areas such as AI infrastructure governance, specialized healthcare domains like palliative care, and communication through personalized avatars built from private data. Beyond healthcare, I’m building teaching aids related to patents and also focusing on the challenge of commercializing underutilized intellectual property. Despite their potential, the vast majority of new patents are never activated or brought to market. So, if any listeners work on something related or see potential synergies, I’d be happy to connect.
Vlada Gurvich: Thank you so much for coming today. It was a pleasure to have you.
Masatoshi Honda: Thank you so much. Thanks for having me.