Introducing the Solaris Capability Diagnostic

Annie Liao

<p>AI hasn’t been rolled out cleanly across organisations. It’s crept in through individual usage, side projects, and quiet experiments. Even when companies do invest in AI tools, the result is often the same: budget spent on platforms that employees barely understand, use inconsistently, or ignore entirely.</p><p>What most companies have today isn’t a coordinated strategy, but scattered adoption across tools, teams, and roles.</p><p>Before you scale AI, you need to diagnose it.</p><p>Today, we’re introducing the <strong>Solaris Capability Diagnostic</strong>.</p><div class="video-embed"><iframe loading="lazy" frameborder="0" allow="autoplay; fullscreen" allowfullscreen="true" height="409" width="728" src="https://www.youtube-nocookie.com/embed/UeliQL4hyTE?rel=0&autoplay=0&showinfo=0&enablejsapi=0"></iframe></div><p><strong><a href="https://solaris-demo.manus.space/">Apply for early access →</a></strong></p><p>The Solaris Capability Diagnostic is an AI-driven, role-specific assessment that maps exactly how AI is being used across your organisation - and where it isn’t. It adapts to each role - operations, sales, engineering, product, design - and reflects how AI actually shows up in real work.</p><p>Instead of a single score, you get a structured view of capability across the company. The system measures AI fluency, captures usage patterns across tools like ChatGPT and Copilot, and maps core jobs to be done across roles. This creates a clear baseline of how AI is being used today.</p><p>From there, everything is visible in one place. You can see your organisation in a single view, then drill down into teams, roles, and individuals to understand where capability is strong and where it breaks down. Tool misuse, project gaps, and inefficiencies become clear quickly.</p><p>Once your baseline is mapped, patterns become obvious. The same jobs to be done repeat across teams. The same types of tasks consume time without adding much leverage. 
These are surfaced as concrete use cases - email handling, reporting, internal coordination - areas where AI can be applied immediately.</p><h3><strong>What you get</strong></h3><ul><li><p>a baseline of AI capability across your organisation</p></li><li><p>visibility into tool usage and shadow AI across teams</p></li><li><p>role-specific insights into projects and gaps</p></li><li><p>prioritised use cases ranked by impact and ROI</p></li><li><p>a clear rollout plan for where to deploy AI first</p></li></ul><div class="captioned-image-container"><figure><img height="736" width="1456" src="https://substackcdn.com/image/fetch/$s_!MgF1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F863cec82-1d31-4f85-a4bc-c465e9145bad_2886x1458.png"></figure></div><p>These use cases are then prioritised based on impact. What starts as scattered usage becomes a clear rollout plan, grounded in where AI will actually drive value. Tools like ChatGPT, Copilot, and Manus are deployed based on need, not assumption.</p><p>From here, teams can move directly into execution. The Capability Diagnostic feeds into a structured AI accelerator, where the identified use cases are built, tested, and rolled out across the organisation.</p><p>Over time, the system tracks adoption and change. You can see how usage evolves across teams, how behaviour shifts, and where capability improves. AI becomes something that can be measured and managed, not just experimented with.</p><p><strong>Hear from our design partners:</strong></p><p><em>“The survey didn’t just measure AI capability - it challenged how intentionally I’m actually using it. What stood out most was the gap between having access to AI and truly being enabled by it. 
It shifted my thinking away from tools and toward patterns, mindset, and how we design work around AI.”</em></p><p>- Kim Bussian, Product Owner @ VGL Publishing AG</p><p><em>“A structured, thoughtful way to assess both current proficiency and where AI could be applied more effectively day-to-day. It surfaced tools I wasn’t aware of and gave me a clearer framework for approaching AI adoption with real intentionality.”</em></p><p>- Diana Nanuti, Senior Software Engineer @ Chainalysis</p><p><em>“The survey gave me a clear, structured view of my AI capabilities - strengths and gaps included. It was a useful benchmark for figuring out exactly where to focus next.”</em></p><p>- Sau Ching, Data Informatics Manager @ Twilio</p><p>The Solaris Capability Diagnostic is built on a simple idea: you can’t scale what you can’t see, and you can’t improve what you haven’t diagnosed.</p><p><strong>Early access is limited to 100 organisations.</strong></p><p>If you’re serious about understanding where your organisation actually stands before scaling AI, this is where you start.</p><p><strong><a href="https://solaris-demo.manus.space/">Apply for early access →</a></strong></p>