Opening context
A large, research-led university operates a highly complex digital environment, supporting both staff and students across multiple campuses, faculties, and modes of study. Teaching, research, and administration all depend on reliable access to core digital services—often under time pressure and in shared physical spaces.
Leadership had a growing sense that digital experience varied widely across the institution. Some services appeared to work well for certain groups, while others generated consistent frustration. However, these impressions were fragmented, shaped by local feedback and individual escalation rather than a shared view of reality.
The institution needed a clear, representative baseline of how technology was actually being experienced—across both staff and students—before making further decisions about prioritisation and investment.
The decision context
Senior leaders faced ongoing decisions related to connectivity, learning platforms, support models, and campus technology. These decisions carried real risk: uneven experience could directly affect teaching quality, research productivity, and student satisfaction.
The challenge was confidence. Leadership needed to understand:
- where experience issues were systemic versus local,
- how staff and student experience compared,
- and which services most influenced overall perception.
Without this clarity, there was a risk of optimising for the loudest issues rather than the most consequential ones.
Why existing signals fell short
Operational data showed availability, incidents, and response times, but did not explain how technology was experienced day to day in lecture halls, offices, or shared study spaces.
Local surveys and informal feedback provided colour, but varied widely by faculty and role. Student and staff experience were often considered separately, making it difficult to see where challenges overlapped—or where they diverged meaningfully.
As a result, leadership lacked a single, comparative view that could support institution-wide decision-making.
How Voxxify was used
Voxxify was used to establish a single, time-bound baseline across both staff and students. The objective was not continuous monitoring, but decision clarity.
The assessment captured lived experience across core services—including productivity tools, connectivity, access and authentication, teaching spaces, and support—allowing leadership to compare experience patterns by role, location, and service.
This created a shared reference point: one dataset that reflected how technology was actually experienced across the institution.
What changed as a result
The baseline surfaced several important realities. Overall experience was mixed, with clear variation by service and role. Some foundational tools were supporting work and study as intended, while others created recurring friction that was not visible in operational reporting.
Critically, the insight allowed leadership to distinguish:
- issues affecting both staff and students,
- challenges concentrated in specific environments (such as teaching spaces or connectivity),
- and services whose experience had a disproportionate influence on overall perception.
Conversations shifted from debating individual complaints to aligning on evidence. Decisions about prioritisation, communication, and next steps became more deliberate and less reactive.
Closing insight
By establishing a shared baseline of lived digital experience across staff and students, this university reduced uncertainty at a critical decision point. The insight did not prescribe solutions, but it changed the quality of discussion, grounding future decisions in a clearer understanding of reality.
This example reflects how one university used lived IT experience as an executive decision input. Application and outcomes will vary by context. Names are withheld to respect client confidentiality; the intent is to illustrate an approach, not to serve as a reference.
