
Constantly striving to find new and novel ways to provide an exceptional digital experience


Therapy Box is a recipient of a Queen's Award for Enterprise in the Innovation category. We have a track record of creating new technology and finding new uses for established products, and we seek to include surprising and helpful features in each new product or update that we release.

Therapy Box was built on innovation

Our main motivation since day one has been to create apps for good through innovation. We develop or find novel technologies and use them within our apps to solve the problems that people face and to make their lives run more smoothly. By staying on the cutting edge, we can ensure that our customers are the first to have access to the best new features.


Word Prediction

Predictable was the first AAC app on the App Store with inbuilt, self-learning word prediction. With a keyboard and word prediction app, users with higher cognitive function had access to their whole vocabulary range, without the limitations of the other apps available at the time.

Word prediction inside Predictable has grown stronger over time: we have added double word prediction, editing options, autocomplete, autocorrect and dyslexia support to Predictable's proprietary word prediction engine.
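To give a flavour of how self-learning word prediction can work in principle, the minimal Python sketch below learns bigram frequencies from sentences the user types and suggests the most likely next words. It is an illustration only, not Predictable's proprietary engine, and the example sentences are invented.

```python
from collections import defaultdict

class WordPredictor:
    """Minimal self-learning next-word predictor based on bigram counts.

    Illustrative only: a production engine adds double word prediction,
    autocorrect, dyslexia support and much more.
    """

    def __init__(self):
        # counts[previous_word][next_word] -> how often that pair was typed
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sentence: str) -> None:
        """Update the counts from a sentence the user has just typed."""
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, previous_word: str, k: int = 3) -> list[str]:
        """Return the k most frequently observed words after `previous_word`."""
        candidates = self.counts[previous_word.lower()]
        return sorted(candidates, key=candidates.get, reverse=True)[:k]


predictor = WordPredictor()
predictor.learn("I would like a cup of tea")
predictor.learn("I would like to go outside")
print(predictor.predict("would"))  # ['like']
print(predictor.predict("like"))   # ['a', 'to'], ordered by observed frequency
```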

Dysarthric Speech Recognition

Mainstream speech recognition does not work reliably for dysarthric speakers, leaving a significant portion of people with disabilities unable to access their devices in a way that should be both quick and simple.

In our VocaTempo project, we are working with the University of Sheffield and Barnsley Hospital to develop an app which can be trained to recognise dysarthric speech. The app is currently being tested with end users, and the project will make it easier for people with disabilities to access their devices. Dysarthric speech recognition technology has many potential applications in communication, accessibility and articulation rehabilitation, which we are committed to exploring in the coming months and years.
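The VocaTempo recogniser itself is not described here, but a common way to build a small-vocabulary, speaker-trained recogniser is template matching: the user records each word in their own voice a few times, and new utterances are compared against those templates. The Python sketch below illustrates that idea with MFCC features and dynamic time warping; it assumes the librosa and numpy libraries, and the .wav file names are placeholders.

```python
import numpy as np
import librosa

def mfcc_features(path: str) -> np.ndarray:
    """Load a recording and return its MFCC feature matrix (frames x coefficients)."""
    audio, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# "Training": the user records each vocabulary word in their own voice a few times.
templates = {
    "hello": [mfcc_features("hello_take1.wav"), mfcc_features("hello_take2.wav")],
    "drink": [mfcc_features("drink_take1.wav"), mfcc_features("drink_take2.wav")],
}

def recognise(path: str) -> str:
    """Return the vocabulary word whose templates best match the new utterance."""
    query = mfcc_features(path)
    scores = {word: min(dtw_distance(query, t) for t in temps)
              for word, temps in templates.items()}
    return min(scores, key=scores.get)

print(recognise("new_utterance.wav"))
```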


Eye tracking

Eye tracking has traditionally required expensive hardware and relied on mouse emulation, which limited the number of devices on which it could be used. We are developing eye tracking software that relies solely on a device's front-facing camera. This will allow people with disabilities to interact with their devices more easily and at a lower cost than ever before. Compatibility will no longer be an issue, giving users greater choice over their preferred device.

We will include camera-based eye tracking in the START project (being developed in partnership with the University of Reading), and will continue to refine our own eye-gaze system for use in our in-house range of AAC apps.
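As a rough illustration of what camera-only eye tracking involves (and not a description of our own eye-gaze system), the Python sketch below uses OpenCV's stock Haar cascades to locate the eyes in webcam frames and estimates a pupil position by thresholding for the darkest region. Mapping pupil positions to on-screen gaze points requires a calibration step that is not shown.

```python
import cv2

# Stock OpenCV Haar cascades for face and eye detection (shipped with opencv-python).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # front-facing / built-in camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            eye = face[ey:ey + eh, ex:ex + ew]
            # The pupil is roughly the darkest blob in the eye region:
            # threshold it and take the centroid as a crude pupil position.
            _, thresh = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
            moments = cv2.moments(thresh)
            if moments["m00"] > 0:
                px = int(moments["m10"] / moments["m00"])
                py = int(moments["m01"] / moments["m00"])
                cv2.circle(frame, (fx + ex + px, fy + ey + py), 3, (0, 0, 255), -1)
    cv2.imshow("eye tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```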

Screening tools with machine learning

The diagnosis of many conditions relies on screening tests carried out by professionals. The status quo is for these tests to be recorded and then reviewed after the fact. Speech pathologists are also currently required to take a lot of notes, either during the session (reducing patient interaction at the time) or afterwards (consuming more time in the long run).

Several of our current projects utilise existing software solutions to partially or fully automate screening tests, allowing professionals to interact more with their patients during sessions, making screening tests simpler to administer, and reducing the time subsequently spent reviewing or transcribing them. This gives professionals more time to spend face to face with clients.

Where no existing technology improves the screening process, we are researching and developing new machine learning software to implement in the final product.
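As a generic illustration of the machine learning pattern involved (the specific conditions, features and models in our projects are not described here), the Python sketch below trains a scikit-learn classifier on placeholder session features and reports how well it flags sessions for professional review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder data: each row is one recorded screening session, summarised as
# numeric features (e.g. speech rate, pause ratio, articulation error count).
# In practice these would be extracted automatically from the recording.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The model's output is a screening flag for a professional to review,
# not a diagnosis.
print(classification_report(y_test, model.predict(X_test)))
```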
