Okay, maybe not banned, but close to it. Let me explain.
When most people think of communication devices, they think (today) of iPads and iPhones with communication software. When they think of that software, they think of two things: First, it needs to speak with a computerized voice. Second, it needs to use cartoon pictures that represent words, or, worse, phrases.
All of this is wrong. One of my preferred forms of AAC is a pencil and paper – it’s cheap, it’s hard to break, it’s not a theft target, it lets me say literally anything I want to say, it allows me privacy, and it works nearly anywhere. Pretty much all of the current crop of iPad software fails on at least some of these counts.
I’ve written before about the problems with high-tech AAC – but I also recognize there are good things about it (and thus I use it too). Too often, high-tech is used when low-tech is better. For example, at an airport, pencil and paper works wonderfully, while iPads fail for many reasons. Airports are often bright, which can make iPad screens hard to read (not always – sometimes they work fine at that light level). Airports are very loud, so the voice is next to impossible to hear coming from tiny iPad speakers (and external speakers bring their own problems). This means people need to read the screen, which often means handing a $500 device to a total stranger. I can take a paper note with me through the metal detector, but I can’t take the iPad with me. Oh, and the batteries don’t die with pencil and paper, and if something goes wrong, replacement parts are available pretty much anywhere. Anyone depending on an iPad needs an iPad in reserve at every location they frequent – school, work, home – but sadly that isn’t something any funding agency understands.
But that’s the problem with the technology. The other problem is the language system used. There are good picture-based language systems (such as Minspeak), and there are also good non-picture-based language systems (such as English!). What these systems have in common is that they aren’t just stored words, they are an actual system. And the system isn’t just picking and organizing a bunch of words (“core vocabulary”), but thinking through things like conjugation and subtle variations in meaning (“I am going to the store”, “I went to the store”, “I will go to the store”, “I am in the process of going to the store”, “I’ve gone to the store”, “I am at the store”). If a picture-based system is used, these should basically be the same button presses, maybe with one variation in the sequence. Usually, they aren’t. If you’re using pictures because someone lacks English literacy, you need your pictures to do this. You need a picture language, not just a bunch of pictures with 1-to-1 associations with words.
More concerning to me is stored phrases. I don’t think most adults with language literacy (picture language or standard language literacy) need more than maybe 5 or 6 phrases. Here’s what I use:
- I don’t speak but I can hear and understand fine.
- I use this to talk.
- Thank you.
Basically, I have the things I need to answer *very* quickly and that I say *a lot* in there. I don’t have things like “My name is” or “I like to eat tomatoes” or any nonsense like that. I don’t get asked my name hundreds of times a day. I do answer yes/no questions a lot, and I need to explain that I don’t speak but am not deaf a lot. I also get questions about what I’m doing (“I use this to talk”). If I want a food dish with tomatoes, I can take a few seconds to spell that out. I’ve thought about adding a 6th, but haven’t gotten around to it yet: “I don’t understand sign language.” When I visited Montreal, I found “Do you speak English?” to be useful, but in my normal travels, it’s not particularly useful.
The reality is that most of the things I say can’t be predicted in advance. Sure, I can try scripting them, but one of the reasons scripting sucks for teaching social skills is that people are a lot more dynamic than that. You can’t make the other person follow your script! If you could, stored phrases would be awesome. But they aren’t.
If you have language, do the following experiment to see what I mean. Pick a day to try to use note cards to communicate. Before you go out, write down everything you might need to say. Then try to use them to go about your routine. Feel free to add cards as needed (take a pencil with you). Put a tick mark on a card every time you use it. Then, at the end of the day, count up the tick marks and divide by the number of cards – this gives you the average number of times each phrase was used. First, I bet you will be surprised by how many cards you need to make. But second, you’ll be even more surprised how little you say certain things you thought you said a lot.
Then, the next day, do the same exercise, but take a note pad and pencil instead of note cards. I bet you find this more convenient.
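The tally arithmetic from that experiment is simple enough to sketch. The counts below are invented purely for illustration – they are not data from the author’s experiment:

```python
# Hypothetical tallies from one day of the note-card experiment:
# phrase -> number of tick marks. All numbers here are made up.
tick_marks = {
    "I don't speak but I can hear and understand fine.": 9,
    "I use this to talk.": 4,
    "Thank you.": 12,
    "My name is ...": 0,
    "I like to eat tomatoes.": 0,
}

total_uses = sum(tick_marks.values())
average_uses = total_uses / len(tick_marks)  # tick marks divided by card count
print(f"Cards made: {len(tick_marks)}")
print(f"Average uses per card: {average_uses:.1f}")

# The per-card counts are usually more revealing than the average:
# they show which phrases earned their place and which never came up.
for phrase, count in sorted(tick_marks.items(), key=lambda kv: -kv[1]):
    print(f"{count:3d}  {phrase}")
```

Even with invented numbers, the shape of the result matches the point of the experiment: a few cards soak up most of the tick marks, and many cards never get used at all.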
So why the focus on stored phrases? It’s two-fold. First, most people who historically used communication devices could not speak because of motor control issues (for instance, cerebral palsy). That’s who governments would buy expensive devices for. The biggest complaint then (and still to an extent now, even among people with relatively standard motor abilities) is the slow speed of communicating using a keyboard, touch screen, or scanning interface (scanning being the slowest). If you can only get one word out every minute, you try to maximize your throughput – and so do the people who are waiting for you to finish your thought (often for selfish reasons). So stored phrases seem like a quick win! That’s where real language systems come into play – how do you give someone flexibility in communication at the same time as you give them speed? You don’t use stored phrases. I will also get to a second point in a minute (talk about speed of communication…) – about how this doesn’t apply in the same way to many autistic people using communication devices that are properly specified for them.
Second, people think stored phrases are easier for people developing language. In some cases, that is true – but primarily as a language motivator. If you have a funny joke or useful phrase programmed into your device, and it comes out as language, you see the beauty and significance of language. It can be very useful in those situations, particularly with beginning language users.
But, for people who already have language, I’m sick of the focus on stored phrases and icons. We need to focus on the actual problem: slow communication speed. And that means an actual evaluation of the person’s abilities, not trying to find the fastest way they can use whatever the sexy technology of the day is (iPad / iPhone). For me, that’s a keyboard – I can type 100+ words per minute on a good quality clicky keyboard (the click is important – I need the auditory feedback and rhythm – and most keyboards suck for that today, because they’ve gone to great lengths to become quiet!). For a lot of my fellow autistic people, I suspect they could type quickly too. Clearly not everyone can – and for people who can’t, appropriate input systems need to be considered. That means I don’t do great with an iPad! That’s okay (laptops are good and cheap these days).
But with typing being a potential 100+ WPM input method, it needs to be considered more often. Studies show that at these speeds, word prediction, word completion, and stored phrases actually slow down input – it’s quicker to just type the word than to cognitively process a suggestion, particularly in badly designed systems that vary the placement of words in selection lists over time based on how you use them (thus preventing you from developing muscle memory for a word). A good system develops muscle memory even if it isn’t used by a 100+ WPM typist – even at slow input speeds.
In addition, the act of trying to scan a page to locate a word among a bunch of pictures or a completion list interrupts the communication process if I already know the word I want to use! And, for me, the biggest problem with speaking is that my own words interrupt my thought process. So I want to minimize interruptions, not add to them (“Okay, I want to say ‘store’, but I have to now find where the word is, and once I find it, I have to remember what came next”). That’s where muscle memory comes in – whether it is a picture-based system or spelling system. But typically stored phrase/word systems pay little attention to muscle memory (hint: if the layout is completely different for the person on their device at age 30 than it was at age 3, it’s probably not developing muscle memory). Anything that involves searching to find a word is a problem in this regard.
That said, I’m not the same as other people. My problem is that I need to say what I’m saying without internal or external interruptions (something very little literature discusses). Other people might have, for instance, word finding problems without working memory problems – for those people, I can see a well organized vocabulary, in a language system, as very helpful. But it still needs to be a language system, not just a list of words or a group of cartoon icons associated with words and phrases.
Finally, my biggest problem with most systems that don’t have complete vocabularies or which rely on stored phrases is that they make it very difficult to tell other people about abuse or personal issues. How do you tell your girlfriend what you like in bed? Or ask a doctor about birth control? Or tell a dirty joke? Or say, “My mother (or speech therapist) is sexually abusing me,” when that person has access to your device too? You probably don’t want stored phrases for these available to everyone. There are solutions to these problems – I believe any system that uses stored phrases needs to be user-programmable, and there needs to be the capability to set up locked pages that nobody but the user can access (for instance, protected with a timed password). I’ve written software that does that, and I’ve made sure to donate those ideas to the public domain – they are prior art, and any vendor can implement them without concern over patents or copyrights. By allowing users programming access, you encourage experimentation with language and the communication of novel thought. That’s a good thing. What I hear from speech language pathologists (SLPs) is, “But the kid might screw up the programming.” Yep. That’s why you have a backup, and a good system lets you restore pages independently of each other and merge backups with the device’s changes selectively.
In closing, I don’t mean to say that if stored phrases work for you, using them is a bad thing. But I do think they are often used without a complete understanding of communication. Since you understand how you communicate, feel free to use them when appropriate!