Artificial intelligence (AI) has moved quickly from experimentation to expectation. For many businesses, the question is no longer whether to use AI, but how to scale it in a way that actually holds up technically, commercially and legally.
That was the focus of the latest Innovation Insights webinar, hosted by Mike Ahyow, Partner at RWK Goodman, which explored what sits beneath the surface of scaling AI. Not just models and performance, but the less visible layers: regulation, data governance, investor scrutiny and customer trust.
A regulatory landscape that’s anything but simple
If there is one thing to understand upfront, it is that AI regulation is rarely tidy.
In the UK, there is currently no single AI law to point to. Instead, regulation is spread across existing frameworks, overseen by different bodies depending on the sector – data protection, competition, financial services and more. It is a decentralised system, guided by broad principles such as safety, transparency, fairness and accountability.
That can be positive for innovative businesses, avoiding a rigid one-size-fits-all model. It also means companies need a clear understanding of which rules apply to their product, sector and route to market.
The wider regulatory picture, however, can often feel chaotic. Depending on the sector, organisations may need to consider GDPR, cybersecurity laws, export controls and other obligations at the same time.
The EU, by contrast, has taken a more structured approach with the EU AI Act. It categorises AI systems by risk, ranging from minimal to prohibited, and applies rules accordingly. The important detail for UK businesses is the Act's reach: if your product touches EU users, you are likely to fall within its scope, regardless of where you are based.
For growing businesses, the practical takeaway is straightforward: regulation is no longer something to think about later. It now forms part of the scaling journey from day one.
What are funders looking for?
For grant funders, AI is compelling because of what it could achieve – solving problems, pushing boundaries and delivering wider societal value. Whether the focus is on healthcare, manufacturing, transport or professional services, funders are often interested in how AI can unlock measurable progress.
That potential requires credible foundations. In many cases, reviewers are experts in their own right, assessing whether a project’s proposed model, workflow or deployment plan genuinely stands up to scrutiny.
Data governance sits at the centre of this. If data is biased, incomplete, or poorly sourced, confidence in the project can quickly fade. That scrutiny only becomes sharper when public money is involved. Funders carry a responsibility to ensure public investment is used effectively, transparently and in the wider public interest.
There is also growing attention on ethics, compliance and long-term viability. Application forms may not always ask direct questions about AI governance, but the strongest submissions address those themes naturally within their wider responses.
As AI capabilities grow, broad assurances are no longer enough. Funders want to see how these issues will be managed in practice. They will also look closely at how risk is managed as a project moves beyond pilot stages and into wider deployment.
Innovation still matters, but it needs to be matched with clear thinking and credible delivery. For funders, the strongest proposals are not just ambitious – they show a level of control, awareness and readiness that suggests the project can stand up to real-world scrutiny.
What are investors looking for?
Investors often explore similar themes to funders, but with greater commercial intent: their questions are asked less through a scientific or ethical lens and more to establish long-term commercial viability.
It is no longer enough to show that a model works; a promising research project does not automatically become a scalable AI business. Founders are often required to shift from an R&D mindset to a commercial one, translating technical strengths into a clear market proposition.
Similar to funders, investors will start with data. They will look closely at where your data comes from, whether datasets and model weights are properly licensed for commercial use, and how those licensing costs change at scale. Regulatory compliance and sector‑specific obligations also come into play, ensuring you have freedom to operate in your target markets, and that the business won’t later be undermined by legal barriers, unexpected costs, or easy imitation by competitors.
These are the types of questions that surface during due diligence, when investors look beyond the pitch deck and into the finer details. Often, the issue is not that the opportunity lacks promise. It is that certain assumptions have never been properly tested before due diligence.
What customers look for: does it work, and can we trust it?
Funders and investors may be backing future potential, but customers are buying something they expect to work now – reliably, securely and with minimal disruption.
This raises practical questions. Can the product integrate with existing systems? Will it perform outside a controlled environment? If something goes wrong, who carries the risk?
For larger organisations, adopting an AI solution from an early-stage supplier can feel exciting and uncomfortable in equal measure. Innovation is attractive, but operational risk is real.
Security and compliance therefore become central.
Customers want reassurance that deploying the AI tool won’t put them in breach of sector‑specific rules in areas such as defence, fintech, medtech or energy, and that sensitive data will not be exposed through weak points created by data flows, third‑party APIs, CRM integrations or cloud storage connections.
This is also where well-run businesses can stand out: clear documentation, a visible governance process around data sourcing, licensing and regulatory compliance, and robust security measures all signal that a supplier can be trusted.
Governance: the thread running through it all
Across all three groups – funders, investors and customers – one theme kept resurfacing: the importance of governance.
Governance isn’t simply corporate jargon; it is the practical basis for strong data management, clear decision-making and controlled risk as AI capabilities grow.
Governance should enable progress, not slow it down.
Timing is everything, however. These practices are far easier to build early than to retrofit later, once customers, investors and regulators are already asking difficult questions.
When the foundations are in place, funding conversations become smoother, due diligence becomes less painful, and onboarding customers becomes easier. Governance structures already aligned with those requirements become an asset rather than a liability when scrutiny arrives.
Scaling with AI: tips for success
Scaling with AI is not only a technical challenge. It is a commercial and organisational balancing act.
Understand your stakeholders: Funders want impact supported by credible thinking. Investors want businesses that can grow sustainably. Customers want solutions that work safely in the real world.
Governance and compliance are day one essentials: Treat data governance, security, licensing, and regulatory compliance as core design constraints, not afterthoughts. Document your processes so you can show your workings to grant funders, investors, and customers.
Plan for future markets and regulations: In a fast-moving AI space, prepare for where the rules are heading, whether you are operating under the EU AI Act today or anticipating future UK-specific AI regulation.
Upcoming Innovation Insights webinars
Want more from Innovation Insights? Register for our next webinars, taking place in May and June.