GTT Toronto Summary Notes, Microsoft Soundscape, September 19, 2019

Summary Notes

 

GTT Toronto Adaptive Technology User Group

September 19, 2019

 

An Initiative of the Canadian Council of the Blind

In Partnership with the CNIB Foundation

 

The most recent meeting of the Get Together with Technology (GTT) Toronto Group was held on Thursday, September 19 at the CNIB Community Hub.

 

*Note: Reading Tip: These summary notes use HTML headings to help you navigate the document. With screen readers, you may press the H key to jump forward, or Shift+H to jump backward, from heading to heading.

 

Theme: Microsoft Soundscape

 

GTT Toronto Meeting Summary Notes can be found at this link:

 

Ian White (Facilitator, GTT)

 

Jason opened the meeting by welcoming the two guest speakers from Microsoft, who joined via Zoom. They talked about Microsoft Soundscape.

Amos Miller introduced himself, noting that his involvement with the project began in the UK, and then introduced Melanie.

Melanie Maxwell said that they are both calling in from Redmond, Washington, and are both part of the Soundscape team. Amos explained that the team is spread out over the U.S. and the UK.

Amos began by describing how Soundscape differs from other GPS apps. We wanted to explore how we could use technology to enrich people's awareness of their surroundings: how we could have a greater understanding of what's around us, and where it is in relation to where we are, to aid with orientation, way-finding, and our experience outdoors. The way we achieve that is through the use of 3D audio, or spatial audio. This means sound that you can hear as though it's in the space around you, not between your ears. You can imagine that if you were standing on a street corner, and there was a Starbucks across the road and to the right in front of you, you would hear the word, “Starbucks,” coming from that direction. Instead of Soundscape telling you there's a Starbucks 200 metres in front of you and to the right, it will just say the word, “Starbucks,” and you will hear that it's 200 metres in front of you and to the right, just from the way you hear it through the headphones. For the best experience, it does require stereo headphones, and we can have a long conversation about that; that's definitely unusual, especially for our community, when you're outdoors and trying to hear the ambient sounds as well. There are very good solutions for that, and a lot of reasons why Soundscape has persisted in advancing the thinking and the experience. When you walk down the street, you will hear those call-outs in 3D around you, giving you that P.O.I. information. We'll also talk about how you can navigate to your destination using what Soundscape refers to as the audio beacon.
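
For the technically curious, the core of that spatialization can be sketched in a few lines: compute the bearing from you to the P.O.I., take it relative to your heading, and pan the sound accordingly. This is a minimal, hypothetical Python sketch, not Soundscape's actual implementation.

```python
# Hypothetical sketch: place a call-out in the stereo field based on
# where the P.O.I. sits relative to the listener's heading.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def stereo_pan(user_heading, poi_bearing):
    """Map the P.O.I.'s direction relative to the user onto a -1 (full
    left) to +1 (full right) pan, the simplest form of spatial audio."""
    relative = (poi_bearing - user_heading + 540) % 360 - 180  # -180..180
    return max(-1.0, min(1.0, relative / 90.0))

# A Starbucks 45 degrees to your right pans halfway into the right channel.
print(stereo_pan(user_heading=0.0, poi_bearing=45.0))  # 0.5
```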

Before I dive into that though, I'll give some background to the project. I'm the Product Manager for Soundscape in Microsoft Research in Redmond. This work started out 4 or 5 years ago when I was still in the UK. I was involved with the local guide dog organization there, working with them to figure out how technology can integrate into our own independence and mobility when we're out and about, but in a way that enhances that experience. Some people from Microsoft started working with mobility instructors, and guide dog and cane users. We explored a range of ideas long before we figured out how to solve the problem. We landed on this notion of how important it is to enhance the awareness, but not tell the person what to do in that space. A lot of what orientation and mobility trainers will do with us is to work on a specific route, but especially on how to perceive the environment: how we read the cues that the environment is giving us from a sound perspective, echolocation, traffic noise, direction of the wind, the tactile feeling of the ground; all of the signals we can get from the environment in order to orient, and make good navigational decisions. The work that we did with Guide Dogs in the early days of Soundscape was really to see how we can build on that. The idea of sound playing a big role in the perception of the space was really how this idea evolved. Soundscape, as an app, is the first incarnation of that idea.

The app is free, and available from the App Store. It does rely on map data, and so it needs to be able to access that data. For the most part, it will download the necessary data for the environment that you're in, and from that point forward it's not using data. So it's not constantly drawing on your data plan, but it does require one. We've tried to optimize it so that data usage is minimal, and in certain situations it will also work in areas where there is no data.

Bose Frames are a very good way to get the stereo effect. Bone-conduction headphones are another good way. EarPods or standard headphones will work, but they will block your ears to ambient sound. Wearing one earbud to keep the other ear free won't be effective, because you won't get the signature 3D effect. Amos said that he personally likes EarPods because of their sound quality, and it's possible to insert them lightly into the ear and still have ambient sound. Some sports headphones are a good solution too, Plantronics for example. This type of headphone rests around the back of your neck and clips over the ear. They sit in front of the ear canal without blocking it. They're commonly used by runners and cyclists.

Melanie then took over. She began by running through some of the core features. The demo she provides will be limited because it can’t be presented in proper 3D audio.

“I'm going to walk us through the home screen first. Our goal with anything we design is that we want it to be really simple to use, and accessible. One thing you'll notice is that we don't have a lot on the home screen. The, set audio beacon, button is one of the largest on the screen. There are also buttons for, my location, nearby markers, around me, and, ahead of me. There are two parts to Soundscape: automatic components, where you can put your phone in your pocket and hear things, and an active component, the buttons on the home screen. For example, if you want to know more about your current location, you can tap the, my location, button. Tapping it gives you information about nearby intersections: what direction you're facing, and which intersection is closest to you. If you're inside, you might hear that you're inside. The callout will change depending on where you are. When your phone is in your pocket and you're moving, Soundscape relies on the directionality of movement from the phone itself.

Another callout we have is, what’s around me. You’ll get location names and distances of places around where you are. You can change a setting between metric and imperial. You have choices for the Soundscape voice as well, including a French Canadian voice. Soundscape uses GPS, so it will only work inside buildings if map data is available. Either way, accuracy inside a building isn’t going to be as good. We have had users make audio beacons inside buildings. This can work reasonably well in a very large building, but we’re not at a place of very good accuracy in buildings.

There are two ways of finding a building. One way is to create your own marker. This relies on the accuracy of GPS. We recommend that if you want to create a marker, walk around the location a bit, as in, walk back and forth in front of it, to allow the phone to get as pinpointed a location as possible. This should get your marker accuracy to within a few metres; you won't get 1-metre accuracy. Don't try to create the marker just as you exit a building, because the phone won't yet have a pinpointed GPS fix.
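
The "walk back and forth" advice amounts to averaging out GPS noise. A minimal sketch of the idea, purely illustrative:

```python
# Averaging several GPS fixes reduces random error when saving a marker.
def average_fixes(fixes):
    """fixes: (lat, lon) samples collected while pacing in front of the
    entrance. Returns their centroid as the marker position. Plain
    averaging is fine for points only a few metres apart."""
    lats = [lat for lat, _ in fixes]
    lons = [lon for _, lon in fixes]
    return sum(lats) / len(lats), sum(lons) / len(lons)

samples = [(43.6890, -79.3940), (43.6891, -79.3942), (43.6889, -79.3941)]
print(average_fixes(samples))
```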

There is a more complicated way as well. Soundscape uses Open Street Maps, which is an open-source map that anyone can update. A lot of the buildings in Open Street Maps have their entrances marked. If Soundscape can find a building entrance in Open Street Maps, it will default to using that. Adding something to Open Street Maps isn't an accessible process, unfortunately, because it's visual and map-based. If there's a building entrance that's particularly important to you, you could have someone go into Open Street Maps and enter it for you, and it will show up in Soundscape. Open Street Maps updates once per week, but it might take two weeks for a change to show up in Soundscape. Markers that you create yourself with Soundscape show up immediately.
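
If you're curious whether a given entrance is already mapped, the data is publicly queryable. Here is a hedged sketch using the public Overpass API for OpenStreetMap; this is one way to inspect the data, not how Soundscape itself fetches it, and the coordinates are placeholders.

```python
# Query OpenStreetMap's Overpass API for entrance nodes within 100 m
# of a point (placeholder coordinates near Yonge and St Clair).
import requests

query = """
[out:json];
node["entrance"](around:100, 43.6890, -79.3940);
out;
"""
resp = requests.post("https://overpass-api.de/api/interpreter", data=query)
for node in resp.json()["elements"]:
    print(node["lat"], node["lon"], node.get("tags", {}))
```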

To create a marker at your current location, from the home screen, find the, mark current location, button, located near the top of the screen. Double tap that. If you land on a tutorial screen, you can dismiss it. A name will be automatically assigned, but you can edit it. Pressing done saves the marker as a custom P.O.I. There's another whole page of controls where you can edit and manipulate your markers.

This moves us on to a unique feature of Soundscape: beacons. Beacons are one way of navigating to a specific place. Instead of giving you step-by-step instructions to follow in order to find your destination, Soundscape creates a sound that emanates from the destination you've set, and you navigate from that. This is done by taking a marker, turning it into a beacon, and activating it.

Start by double tapping on the button on the home screen called, set audio beacon. On the next page, you have a few options. You can set an audio beacon on a marker you’ve already created, or you can enter an address that you want to find. You can also browse nearby places and choose one to place a beacon on. You can also filter nearby places by category, restaurants etc.

 

To set a beacon on an existing marker, from this page, double tap on the, browse your markers, button. Here, you can browse your existing markers. Double tapping on a marker will set it as a beacon.”

Jason added that he and Chris Chamberlin are producing a tech podcast, and one of their recent episodes was on Soundscape. In it, they do a stereo demonstration of setting and following a beacon. Listening to this episode with headphones will give a very accurate experience of using Soundscape.

Amos then opened it up for questions. One member reported that some of the stores Soundscape announced for her in real time were closed. The response was that the app gets its data from Open Street Map, so if that data isn't up to date, Soundscape won't have accurate information. Amos made the point that there will always be a question mark between you and the technology. “In Soundscape, we try to stay on the right side of not pretending that we can do more than what we think we can. We'll never give you an impression of greater accuracy than what we can actually give you with the technology. A great example of that is, if you're navigating somewhere and you get close, Soundscape will tell you you're close, then turn off the beacon, leaving the specific locating of an entrance to you. There will always be cases where there's a dissonance between the technology and your experience. We give you all the information we can, but you'll always have to make sense of it based on your own senses. We had an early incarnation of the app that tried to follow a road. Sometimes the data would be wrong, but testers would follow the beacon out into the middle of an intersection, even though all of their awareness of their surroundings told them it wasn't a good idea. All GPS apps will tell you to use your best judgment, and then they'll give you instructions that are pretty difficult to ignore. We've always been very careful in the design of Soundscape not to give the impression that it knows better than you about the space you're in.”

A member asked whether they are considering adding functionality that would allow Soundscape users to update information in Open Street Map, using a Soundscape interface.

Melanie replied, “That isn't something that's on our immediate road map, but it is something we've discussed. There is a, send feedback, button in Soundscape where we welcome information. We can't necessarily respond to every report by updating Open Street Maps, but we definitely do add our own updates routinely, so it's worth reporting this way if you want to. Open Street Map is open source with a strong community, and we've found that if we flag a particular area as being poorly represented, the community will often step up to fill in the gaps. It may be useful for the visually impaired community in Toronto to make contact with the Open Street Map community in Toronto, to see if the two groups could work together.”

Another member said that she finds it hard to operate the phone while working her guide dog. Is there another way to interface with the app?

Amos responded that most of the information you need will be announced even with your phone locked and in your pocket. If you have the kind of headphones that have play/pause and fast-forward/rewind buttons on them, the play/pause button has a few functions. One press will mute or unmute Soundscape. A double press of that button will activate the, where am I, feature, and a triple press will repeat the last call-out. Bose Frames, Aftershokz and EarPods all have this functionality, and have good sound. We have worked hard for as much of a hands-free experience as possible. It's a background or ambient experience for some users. Some people keep it on in the background while riding the bus and checking email. It's a companion that you should be able to get used to without having to give it a lot of attention, and one you can comfortably ignore until you need it.
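
That press-count scheme is easy to picture as a small dispatch table. The sketch below is illustrative only; the action names are hypothetical, not Soundscape's real internals.

```python
# Map headset play/pause press counts to the behaviours described above.
ACTIONS = {
    1: "toggle_mute",                 # one press: mute or unmute
    2: "announce_current_location",   # double press: where am I
    3: "repeat_last_callout",         # triple press: repeat last call-out
}

def on_button_presses(count):
    action = ACTIONS.get(count)
    if action:
        print(f"dispatching: {action}")

on_button_presses(2)  # -> dispatching: announce_current_location
```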

Soundscape does not work on Android phones. Jason and another member contributed that functionality on Android is important, because accessibility should mean being available on as many devices as possible. Another member added that AMI research has shown that Android use among young people in the visually impaired community is higher, and rising. In general, iPhone use outstrips Android use in the visually impaired community in North America, but that's definitely not true in other parts of the world.

Another member asked if there's any consideration of using voice commands to run Soundscape. Amos replied that there is. iOS provides some even easier ways to do that now, with Siri Shortcuts and so on. There are two reasons why we haven't really gone there. The first is that when you're outdoors in noisy environments, voice control isn't going to work so well, especially if your microphone isn't quite where it needs to be, which can lead to frustration. Secondly, our goal as we optimize is to minimize your need to give Soundscape commands at all. There are certain situations, such as choosing a beacon, which is a handful when you're on the go, where voice commands could simplify things. We look a lot at the telemetry of which buttons are being pressed and so on. When people are on the go, they largely don't need most of them. You don't really need to pull the phone out and press buttons, especially with the headset buttons, but we do look at voice commands. It's always good to hear people's experiences and preferences in that regard.

A member asked if the app will work with iOS 13. Amos replied that it will, but with a caveat: there are many reports of iOS 13 having accessibility issues of its own. The recommendation is to wait a few days until iOS 13.1 comes out.

A member said that she uses Bluetooth hearing aids, and that she was very impressed with how well Soundscape functioned with them.

Amos said, “We are both delighted to hear that; we're both smiling here.”

A member said he wasn't clear how close or far from a destination you should be to use Soundscape, as it doesn't give turn-by-turn directions. Should we be using it in conjunction with another app?

Melanie replied that they have received similar feedback in the past. The current recommendation is that Soundscape can be used alongside other navigation tools. If you’re in a location that you’re not familiar with and you want a lot of detail about how to get there, Google Maps might provide really great turn-by-turn directions. You might then also use Soundscape to help you understand what’s around you as you move from point A to point B. When you’re in a space you feel more familiar with, you might know the general layout but you don’t know exactly where the building is. In that case you might set a beacon on the building and start making the necessary turns.

Amos added that you can do long walks with Soundscape, but that it's really optimal around 150 to 400 metres. It's often very good when you go somewhere using Google Maps and it tells you you've arrived, but you still don't know where the building is. In that case, Soundscape can be very helpful. We do get the question of adding turn-by-turn directions to Soundscape, and we're not ignoring that.
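
That 150-to-400-metre sweet spot is easy to check programmatically. A rough sketch, not from the Soundscape team, using the standard haversine formula for straight-line distance:

```python
# Is the destination in the range where a beacon works best?
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points."""
    r = 6371000.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_beacon_sweet_spot(metres):
    return 150 <= metres <= 400

d = haversine_m(43.6890, -79.3940, 43.6905, -79.3960)
print(round(d), in_beacon_sweet_spot(d))
```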

Over the past year, we've started to explore uses of Soundscape outside the area of city navigation and mobility. We started to explore, for example, the idea of using Soundscape for kayaking. You can use a beacon to keep oriented on a lake; you can hear where the shore is, or where you took off from. We've played around with trails and recreational experiences. We're seeing a lot of interest and traction on that front. Personally, I think the experiences people get in recreation are mind-blowing. They're just wonderful because of the level of independence they give you. So if any of you are so inclined, I highly recommend you try it. We are doing some work with the local adaptive sports organization. We've set up a trial that enables them to curate a route which then surfaces in Soundscape. They're going to run their first adaptive kayaking program next week with Soundscape as a test. It's something that's different, and that we felt was very rewarding for participants.

A member contributed that the active tandem cycling and sailing groups in Toronto might want to connect with Amos.

A member asked what Microsoft is working on for the future of Soundscape.

Amos replied that the recreational aspect is something they’re really excited about, and also the Bose Frames. We have talked about a hands-free experience, and sensors built into the device that track your head movement, enabling us to improve the audio experience. Amos invited Jason, who has had the opportunity to try this type of Bose Frames, to describe the experience.

Jason explained that the newest Bose Frames will have a gyro/accelerometer in them. What it will allow you to do is set a beacon in Soundscape, then locate it just by turning your head, and it's really quite cool.

Amos added that it has some very interesting applications for what Soundscape can offer.

Jason asked how people can give feedback.

Amos answered that they can email soundscapefeed@microsoft.com, and that comes to the team. There is also a feedback button in the app itself.

Amos and Melanie signed off.

Jason then went through a few points.

All of the meeting notes are now up on the GTT website. He then demonstrated something that has been added to the website. Do not try this with Internet Explorer; you must use a modern browser. One of the links at the top of the page is for meeting notes. Jason opened the notes for May 2019. Arrowing down from the main heading, you'll come to a line that says, listen to this article, with a play button. This is a new feature that will read you the article in the new Amazon Newscaster voice. If you would prefer a voice other than JAWS, or if you're a large print user, this is an option. Jason did a demo of the high-quality voice. Any of the meeting notes you call up will offer this option.

iOS 13 was released today. If you have an iPhone 6S or newer, you can run it. It's probably a good idea to hold off on installing it; iOS 13.1 should be out in 4 days or so. They released iOS 13 a bit before it was ready, in order to align with the new iPhone release. iOS 13 offers a lot of cool things. One of the coolest is that you can change all of your VoiceOver gestures. An example of why you might want to do this: there are people who have a really hard time with the rotor gesture. You could change that to a different gesture. Also, if you have anything newer than an iPhone 8, you can turn the VoiceOver sounds into vibrations. There are several vibration patterns to choose from. We're hoping to have a presentation on iOS 13 next month.

Jason also announced a new tech podcast that he and Chris Chamberlin are doing. It’s through the CNIB Podcast Network, and it’s called the CNIB Smartlife Tech cast. It’s on most popular podcast platforms.

 

Upcoming Meetings:

  • Next Meeting: Thursday, October 17, 2019 at 6pm
  • Location: CNIB Community Hub space at 1525 Yonge Street, just 1 block north of St Clair on the east side of Yonge, just south of Heath.
  • Meetings are held on the third Thursday of the month at 6pm.

 

GTT Toronto Adaptive Technology User Group Overview:

  • GTT Toronto is a chapter of the Canadian Council of the Blind (CCB).
  • GTT Toronto promotes a self-help learning experience by holding monthly meetings to assist participants with assistive technology.
  • Each meeting consists of a feature technology topic, questions and answers about technology, and one-on-one training where possible.
  • Participants are encouraged to come to each meeting even if they are not interested in the feature topic, because questions on any technology are welcome. The more participants we have, the better equipped we will be with the talent and experience to help each other.
  • There are GTT groups across Canada as well as a national GTT monthly toll free teleconference. You may subscribe to the National GTT blog to get email notices of teleconferences and notes from other GTT chapters. Visit:

http://www.GTTProgram.Blog/

There is a form at the bottom of that web page to enter your email.

 

 

 

Narrator Tutorial Podcast for Windows 10 Version 1809 by David Woodbridge

This document has recently been updated to include a ninth episode of the podcast.

Narrator Screen Reader Tutorial Podcasts by David Woodbridge

iSee – Using various technologies from a blind persons perspective.

 

Revised: February 24, 2019

 

Narrator is a screen reader utility included in Microsoft Windows that reads text, dialog boxes and window controls in most applications for Windows. Originally developed by Professor Paul Blenkhorn in 2000, the utility made the Windows operating system more accessible for blind and low vision users.

 

In the October 2018 release of Windows 10, Narrator's functions and keyboard commands were dramatically expanded. We are now at a point in its development where it starts to rival the third-party screen readers we have become accustomed to using in the Windows environment. Finally, it might be said that PCs purchased off the shelf are accessible to blind and low vision users out of the box.

 

The latest version of Windows 10 is the October 2018 Update, version “1809,” which was released on October 2, 2018. The tutorial podcasts below apply only to version 1809, so please check which version is running on your computer.

 

How do I know what version I’m running?

To determine whether these tutorials apply to Narrator on your computer, you can check your version number as follows:

 

  1. Press and release the Windows Key and type the word Run, or simply hold down the Windows key and press the letter R.
  2. In the window that pops up, type WinVer and press the Enter key. Typing immediately will replace any text that might already be there.
  3. The computer will display, and your screen reader will speak, the version of your operating system. If it indicates you're running version 1809, Narrator will function as outlined in these podcasts; however, if your computer is still running an older version, please disregard these tutorials for now. Press the Space Bar to close this dialog.
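
For anyone who would rather script the check, the same version string can be read from the registry. A hedged sketch, assuming Python on a Windows 10 machine, where the ReleaseId value holds the “1809”-style string:

```python
# Read the Windows 10 version (e.g. "1809") from the registry.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
)
release, _ = winreg.QueryValueEx(key, "ReleaseId")
print("Windows 10 version:", release)
```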

 

The Complete Guide to Narrator on the Microsoft Windows Help Page:

Click here to access the Complete Narrator’s Guide on the Windows Help Page.

 

David Woodbridge produces great podcasts under the title, iSee – Using various technologies from a blind persons perspective. Below are the links to each individual podcast for you to stream in your favourite podcatcher.

 

Narrator Tutorial Podcasts from iSee – Using various technologies from a blind persons perspective by David Woodbridge:

 

  1. Demo of the Windows Insider build for the new Narrator Quick Start Guide
  2. Windows 10 Narrator Series Episode 1 – turning Narrator on and off
  3. Windows 10 Narrator Series Episode 2 – Narrator keys and Input (keyboard and touch screen) Learning mode
  4. Windows 10 Narrator Series Episode 3 – adjusting speech rate, Volume, Punctuation, and a tip on Verbosity
  5. Windows 10 Narrator Series Episode 4 – changing Volume, Pitch, Audio Ducking, and an initial intro to Scan Mode
  6. Windows 10 Narrator Series Episode 5 – Startup options for Narrator including Narrator Home
  7. Windows 10 Narrator Series Episode 6 – Typing Echo and Keyboard Settings
  8. Windows 10 Narrator Series Episode 7 – Navigating within a document with Narrator keyboard commands
  9. Windows 10 Narrator Series Episode 8 – Scan Mode, Narrator Views, and using Narrator Gestures with the Touch Screen
  10. Windows 10 Narrator Series Episode 9 – Navigating on the Web with Narrator

 

To subscribe to the “iSee – Using various technologies from a blind persons perspective” podcasts by David Woodbridge click on this link.

 

Thx, Albert A. Ruel

 

Narrator Tutorial Podcasts for Windows 10 by Blind Vet Tech Podcast

Narrator Screen Reader Tutorial Podcasts by Blind Vet Tech

 


 

Blind Vet Tech Guides and Tutorials:

Are you a visually impaired Veteran interested in learning more about technology and adaptive software? Have you received a device, like an iPhone or iPad, from a Blind Rehab Center, but require more information on how to use it? Are you a visually impaired Veteran looking for a network of peers to assist you in determining if updating your device is the right choice? If you answered yes, or simply are interested in learning more about assistive technologies for blinded Veterans, the Blind Vet Tech Quick Guides and Tutorials podcast will assist you. Developed by blinded Veterans to help our fellow peers adapt to sight loss, Blind Vet Tech focuses on iPhones, iPads, computers, other smart phones, and different technologies Veterans might receive to increase their independence.

 

To that end, BVT has produced a spectacular series of tutorial podcast episodes teaching users how to maximize their use of the latest version of Narrator. Below are hyperlinks to each of the Blind Vet Tech Podcast episodes on the web.

 

Blind Vet Tech Direct Links to Narrator Podcast Episodes:

 

  1. Windows 10 Narrator Basics
  2. Navigating Webpages and Netflix With Narrator’s Scan Mode
  3. Narrator’s Five Best Windows 10 Fall Creators Update Features
  4. Activating Narrator
  5. Basic Keyboard Commands and Navigation
  6. Quickly navigate Edge, tables, and apps with Scan Mode On
  7. Learn how to read documents, apps, webpages, and much more with Narrator

 

To subscribe to the Blind Vet Tech podcast follow this link.

 

Thx, Albert A. Ruel

 

GTT Toronto Summary Notes, Seeing AI, TapTapSee, Be My Eyes and Aira, January 17, 2019

Summary Notes

 

GTT Toronto Adaptive Technology User Group

January 17, 2019

 

An Initiative of the Canadian Council of the Blind

In Partnership with the CNIB Foundation

 

The most recent meeting of the Get Together with Technology (GTT) Toronto Group was held on Thursday, January 17 at the CNIB Community Hub.

 

*Note: Reading Tip: These summary notes use HTML headings to help you navigate the document. With screen readers, you may press the H key to jump forward, or Shift+H to jump backward, from heading to heading.

 

Theme: Seeing AI, TapTapSee, Be My Eyes and Aira

 

GTT Toronto Meeting Summary Notes can be found at this link:

 

Ian White (Facilitator, GTT)

Chelsie Moller (Presenter, Balance for Blind Adults)

 

Ian opened the meeting. Chelsie Moller will be presenting on recognition apps.

 

General Discussion:

  • We began with a general discussion. OrCam will be presenting at the White Cane Expo. AIRA will not. We’re still in negotiation to see if they will open up the event as a free AIRA event space. Apple will also not be there. They make it a corporate policy not to present at generalized disability events.
  • Ian raised the issue of getting a media error 7 when he’s recording on his Victor Stream. Is there a list of errors somewhere? Jason answered that perhaps it’s a corrupted SD card. A member said that there’s a list of errors in an appendix to the manual, which can be accessed by holding down the 1 key.
  • Michael asked if there's a way to add personal notes in BlindSquare, such as, 25 steps. One recommendation was a document that you could access through the cloud. Another recommendation was to mark a “point of interest” in BlindSquare. When you do this, you can name it, so you could call it, Shoppers 25, to indicate 25 steps. Another recommendation was to make notes using the iPhone Notes app. Another recommendation was to set up geo-dependent iPhone reminders. Within a radius of the spot you want, your phone would tell you whatever information you put in.
  • A member raised the problem of using Windows 10 and JAWS, trying to synchronize contacts and email with Apple, and ending up with duplicate folders in his Outlook email. Microsoft Exchange might help.
  • Jason told the group that he has an Instant Pot Smart available for sale. This is a pressure cooker that works with the iPhone, and it's no longer sold as an iPhone-connectable device. He's thinking $100; talk to him privately if interested.
  • Then he described a new keyboard he got: a Bluetooth keyboard called the Rivo 2, which he received as a demo unit. It's got 24 keys. You can type on your phone with it, or control your phone with it. It's most useful when you need to key in numbers after having made a call, such as keying in bank passwords. Alphabetic entry works the way old cell phones did: press 2 twice for B (see the multi-tap sketch after this list). It has actual physical buttons. It can control every aspect of VoiceOver. You can also route your phone audio to it, so you're essentially using it as a phone. It's about $300. It can be paired to iPhone and Android. Here's a link to the David Woodbridge podcast demonstrating the Rivo Keyboard:
  • A member asked if Phone It Forward is up and running. This is a program in which CNIB takes old phones, refurbishes them, and redistributes them to CNIB clients. Phone It Forward information can be found at this link.
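
As promised above, here is a minimal illustrative decoder for the Rivo's old-phone-style alphabetic entry; a sketch of the general multi-tap technique, not the Rivo's actual firmware.

```python
# Multi-tap text entry: pressing a digit repeatedly cycles its letters,
# so pressing 2 twice selects "B".
KEYPAD = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
          "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}

def multitap(key, presses):
    letters = KEYPAD[key]
    return letters[(presses - 1) % len(letters)]

print(multitap("2", 2))  # -> "B"
```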

 

Seeing AI, TapTapSee, Be My Eyes, and AIRA Presentation:

Ian introduced Chelsie, who is an Adaptive Technology Trainer and Engagement Specialist. She's here tonight to talk about recognition apps.

We're going to focus on four apps: Seeing AI, TapTapSee, Be My Eyes, and AIRA.

  • Seeing AI is an app that allows the user to do a variety of visual tasks: scene description, text recognition, vague descriptions of people, light levels, currency recognition, and colour preview. Each of these functions is called a channel. As a side note, Chelsie said that her iPhone X uses facial recognition as a passcode. A store employee told her it wouldn't work because it needs to see your retina, but this isn't true; it works from facial contours.

Chelsie opened the app. There's a menu, quick help, then channel chooser. To get from channel to channel, flick up. She did a demonstration of short text with a book. It's helpful for reading labels and packaging. Try to keep the camera about a foot above the text, and centred. This requires some trial and error. The document channel takes a picture of the text; it's better for scanning a larger surface. Short text is also very useful for reading your computer screen if your voice software is unresponsive. Short text will not recognize columns, but document mode usually will. The product channel is for recognizing bar codes. This is a bit challenging because you have to find the bar code first. Jason said that it's possible to learn where the codes typically appear: near the label seam on a can, or on the bottom edge of a cereal box. The person channel tells you when the face is in focus, then you take a picture. You get a response that gives age, gender, physical features, and expression. Chelsie demonstrated these, as well as the currency identifier. It's very quick. The scene channel also takes a picture, and gives you a very general description. The colour identification channel is also very quick. There's also a handwriting channel, which has mixed results. The light detector uses a series of ascending and descending tones. Besides the obvious use of detecting your house lights, it's also useful in diagnosing electronics. If you turn all other lights off, you can use it to see if an indicator light on a device is on.
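
To make the light detector's rising-and-falling tones concrete, here is a tiny illustrative mapping from a brightness reading to a pitch; a sketch of the general idea, not Seeing AI's actual code.

```python
# Map an average frame brightness (0-255) onto a tone frequency, so
# brighter light produces a higher pitch.
def brightness_to_pitch(brightness, low_hz=220.0, high_hz=880.0):
    level = max(0, min(255, brightness)) / 255.0
    return low_hz + level * (high_hz - low_hz)

print(brightness_to_pitch(0))    # dark room   -> 220.0 Hz
print(brightness_to_pitch(255))  # bright LED  -> 880.0 Hz
```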

Seeing AI is free. It’s made by Microsoft, who has many other ways of generating revenue.

  • TapTapSee is a very good app for colour identification. This is always a tricky thing, because colour is often subjective, and is affected by light levels. TapTapSee takes a picture, and gives a general description including colour. For more accurate colour description, Be My Eyes and AIRA are better. TapTapSee is free.
  • Be My Eyes is a service in which a blind person contacts volunteers who help with quick identification or short tasks. Because they're volunteers, the quality of help varies, and you may have to wait for a volunteer. There's a specialized help button: you can use Be My Eyes to call the disability help desk. This is useful if you need technical help from Microsoft and they need to see your screen. This app is also free.
  • AIRA is a paid service. Chelsie has been using it for a month, and she's very happy with it. It connects a blind user with a trained, sighted agent. This could be anything from “What is this product?” or “I need to find this address,” to “I need to navigate through a hospital or airport.” When you set up your profile, you can specify how much information you want in a given situation, and how you like to receive directions. Agents can access your location via GPS, in order to help navigate. They will not say things like, “It's safe to cross,” but they will say things like, “You have a walk signal with 10 seconds to go.” They're seeing through either your phone camera, or through a camera mounted on glasses you can wear.

They have three plans. The introductory plan is 30 minutes; you cannot buy more minutes in a month on this plan, though you can upgrade. The standard plan is 120 minutes at $100, and there's a $125 plan that gives you 100 minutes plus the glasses. The advantage of the glasses is that you can be hands-free when travelling. The glasses have a cord connecting them to an Android phone that has been dedicated to the AIRA function. Otherwise, you simply use your own phone with its built-in camera, via an app that you install.

The question was raised about whether the glasses could be Bluetooth, but the feedback was that there’s too much data being transmitted for Bluetooth to work.

On the personal phone app, you open the app and tap on the “call” button. With the glasses, there's a dedicated button to press to initiate the call.

Chelsie spoke about how powerfully liberating it is to have this kind of independence and information. You can read her blog post about her experience here.

The third plan is 300 minutes at $190. All prices are in U.S. dollars.
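
For quick comparison, the per-minute arithmetic on those quoted prices works out as below. Note the $125 option bundles the glasses hardware, so dollars per minute alone undersells it.

```python
# Cost per minute for the AIRA plans quoted above (USD).
plans = {
    "standard (120 min)": (120, 100.00),
    "glasses bundle (100 min)": (100, 125.00),
    "large (300 min)": (300, 190.00),
}
for name, (minutes, dollars) in plans.items():
    print(f"{name}: ${dollars / minutes:.2f} per minute")
# standard: $0.83, glasses bundle: $1.25, large: $0.63
```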

Jason added that, in the U.S., many stores are becoming Sight Access Locations. This means that if you already have an AIRA subscription, use at these locations won't count against your minutes. The stores pay AIRA for this. This will likely begin to roll out in Canada. Many airports are also Sight Access Locations. You can't get assigned agents, but you may get the same agent more than once. If you lose your connection, the agent will remain on hold for about 90 seconds, so that you can get the same agent again if you call back immediately. For headphones, you can use earbuds or Aftershokz.

 

Upcoming Meetings:

  • Next Meeting: Thursday, February 21 at 6pm
  • Location: CNIB Community Hub space at 1525 Yonge Street, just 1 block north of St Clair on the east side of Yonge, just south of Heath.
  • Meetings are held on the third Thursday of the month at 6pm.

 

GTT Toronto Adaptive Technology User Group Overview:

  • GTT Toronto is a chapter of the Canadian Council of the Blind (CCB).
  • GTT Toronto promotes a self-help learning experience by holding monthly meetings to assist participants with assistive technology.
  • Each meeting consists of a feature technology topic, questions and answers about technology, and one-on-one training where possible.
  • Participants are encouraged to come to each meeting even if they are not interested in the feature topic, because questions on any technology are welcome. The more participants we have, the better equipped we will be with the talent and experience to help each other.
  • There are GTT groups across Canada as well as a national GTT monthly toll free teleconference. You may subscribe to the National GTT blog to get email notices of teleconferences and notes from other GTT chapters. Visit:

http://www.GTTProgram.Blog/

There is a form at the bottom of that web page to enter your email.