Airacast Episode 20: Aira’s Street Crossing Policy

Hi all.  I listened to this podcast yesterday and got the low-down on how Aira has changed its street crossing policy.  Previously, Aira Agents wouldn’t talk to you at all while you were crossing a street; as of November 4, 2019, they offer limited information during street crossings if you ask for it.  To me this is a huge game changer.  Check out the podcast link below.

 

https://overcast.fm/+QWHMkaT2M

 

Thx, Albert

 

Access: Technology lags for people with vision, hearing impairments, Victoria News

Access: Technology lags for people with vision, hearing impairments

Author: Nina Grossman

Date Written: Oct 23, 2019 at 9:30 AM

Date Saved: 10/28/19, 8:53 PM

Source: https://www.vicnews.com/news/access-technology-lags-for-people-with-vision-hearing-impairments/

This is the third instalment of “Access,” a Black Press Media three-part series focusing on accessibility in Greater Victoria. See Part One, “Access: A Day in the Life Using a Wheelchair in Victoria,” and Part Two, “Access: Greater Victoria non-profit brings the outdoors to people of all abilities.”

Heidi Prop’s fingers run over the raised white cells on her BrailleNote Touch Plus. She easily reads more than 200 words per minute, consuming online content with the tips of her fingers faster than most people can with their eyes.

Without vision since birth, Prop doesn’t ‘see’ the words in her head when the pins pop up to form braille words on the Android-based braille tablet; instead, she hears them like a narrator. She’s sitting in an office at the Pacific Training Centre for the Blind (PTCB) in Victoria, but the braille display allows her to read and write almost anywhere. With a braille output, Prop can check her email, browse the web, download apps and more.

The device is a model of technology that’s added ease to her life, but not all aspects of digitization have made the same leap; many aspects of the internet remain hidden to the blind community.

For example, devices called ‘screen readers’ make web pages accessible, but often stumble when navigating inaccessible websites. Elizabeth Lalonde, PTCB executive director, opens a Wikipedia page on grizzly bears and a robotic voice begins washing over the screen at a rate too rapid for most of the sighted population to consume.

But before the screen reader reaches the information, Lalonde has to navigate a series of unlabeled links and buttons – small hurdles standing in front of the content she’s trying to reach.

PTCB helps people who are vision-impaired learn how to navigate the world around them – from crossing the street and taking transit to cooking dinner or reading braille.

The centre also focuses heavily on using the web – a skill more or less required in order to survive the modern world. But technology is advancing beyond the speed of accessibility, says Alex Jurgensen, lead program coordinator at PTCB, who adds that creators end up playing catch up, adapting their websites and devices for vision and hearing-impaired users long after initial creation.

“A lot of information is out there, but websites can often be inaccessible,” Jurgensen says, noting that forms, apps and anything with unusual or unlabeled text can pose a challenge. Tabbing through unlabeled links leaves the screen reader saying only “link” with no further description, and landing on an image with no alt text embedded in the code simply reads off the name of the image file.
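The failure cases Jurgensen describes are easy to detect mechanically. As a rough illustration (not a tool mentioned in the article), a short Python script using the standard-library HTML parser can flag images with no alt text and links with no readable label:

```python
from html.parser import HTMLParser

class AccessibilityAudit(HTMLParser):
    """Flag images lacking alt text and links lacking any label --
    the cases a screen reader announces as a bare filename or 'link'."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self._in_link = False
        self._link_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # No alt attribute at all: a screen reader falls back to the filename.
            self.issues.append(f"img without alt: {attrs.get('src', '?')}")
        elif tag == "a":
            self._in_link = True
            self._link_text = ""

    def handle_data(self, data):
        if self._in_link:
            self._link_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            if not self._link_text.strip():
                self.issues.append("link with no text (reads as just 'link')")
            self._in_link = False

audit = AccessibilityAudit()
audit.feed('<a href="/x"></a><img src="pug.jpg">'
           '<img src="ok.jpg" alt="pug sits on a red blanket">')
print(audit.issues)  # the empty link and the pug.jpg image are flagged
```

A real audit would also check form labels and ARIA attributes; this sketch only covers the two symptoms the article names.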

Lalonde says Instagram, for example, is simply not worth using for the vision impaired. But it could be if people described what was in their photos, or if Instagram added an alt text option for each picture, so users could describe what they posted, such as “pug sits on a red blanket in the park on a sunny day.”

Jurgensen describes it as adding a ‘sticky note’ to your image – an easy step that allows those who are vision-impaired to access a prominent element of everyday internet use.

But some elements of the information age don’t adapt. Memes, for example: text created as part of an image is indecipherable to screen readers. Jurgensen notes apps such as Skip the Dishes can be difficult too; without labelled button options, he has ordered food far spicier than he intended.

One exception is the iPhone, which becomes usable for vision-impaired users with the simple slide of a toggle that turns on ‘voice over.’

“Camera. Maps. Google. Finance Folder.” The robot voice used to guide drivers to their destinations guides Lalonde through her phone. She double taps on the screen when she’s ready to use an app.

But devices with built-in accessibility software are few and far between – a disheartening reality for the more than six million Canadians living with disabilities.

Lalonde and Jurgensen say websites and online content should be “born accessible,” with accessibility built-in as part of the creation, instead of as afterthoughts or available only through expensive or impractical add-on software.

People with vision impairments aren’t the only ones facing challenges, either. A huge number of videos fail to include subtitles or descriptions of their content, throwing up barriers for anyone with a hearing impairment.

And the barriers are nothing new. The Web Content Accessibility Guidelines were published in 1999 by a group of international experts in digital accessibility, and have been used internationally to shape digital accessibility policies.

The experts created a testing and scoring format for websites and programs, finding the most successful sites included criteria such as text alternatives for audio tracks (so people who are hearing impaired can access audio information), the ability to re-size text, the ability to turn off or extend time limits on tasks, and consistent design, so people always know where to find what they are looking for when navigating the site.

READ ALSO: Victoria’s $750,000 accessibility reserve fund makes improvement ‘not the side project’

And while the Canadian Charter of Rights and Freedoms included people with disabilities when it was created in 1982, it’s only recently that a bill relating directly to accessibility was taken to the House of Commons.

The Accessible Canada Act (Bill C-81) received unanimous support in May and is in the final stages of becoming law. Accessibility Minister Carla Qualtrough called the bill “the most transformative piece of legislation” since the Charter of Rights and Freedoms and called its progress “a testament to the work, commitment and contributions of the Canadian disability community.”

The bill, still not fully formed, is expected to include digital content and technologies law, likely based on the Web Content Accessibility Guidelines – meaning a number of official sites might be scrambling to get their content up to code.

“A lot of the solutions are fairly simple,” Lalonde notes. “But it’s a question of getting businesses and innovators to adapt accessibility into their process from the start.

“It’s a catch-22,” she adds. “Technology has made a major difference in my life and I know [in] the lives of a lot of blind people because it’s allowed us to access so much more information than we could access before. In some ways it’s been absolutely phenomenal, but … the lack of accessibility keeping up with the technology – that’s the problem.”

Jurgensen nods. “No matter how many steps we take forward it feels like it’s a cat and mouse game, and we’re the ones who are one step behind.”

nina.grossman@blackpress.ca

iOS 13 Tip: Quickly Activate Reader Mode in Safari | Thoughts from David Goldfield

In previous versions of iOS it was fairly easy to activate reader mode while on a supported page in the Safari Web browser. All that was needed was to navigate to the Reader button, located toward the upper left hand corner below the status line, and, if you are a VoiceOver user, double-tap. iOS 13…
— Read on davidgoldfield.wordpress.com/2019/10/20/ios-13-tip-quickly-activate-reader-mode-in-safari/

First Public Beta of JAWS 2020 Posted with Improved OCR, Form Control Handling, More | Blind Bargains, by J.J. Meddaugh on September 17, 2019

First Public Beta of JAWS 2020 Posted with Improved OCR, Form Control Handling, More

Author: J.J. Meddaugh

Date Written: Sep 17, 2019 at 4:38 PM

Date Saved: 9/19/19, 11:33 AM

Source: https://www.blindbargains.com/bargains.php?m=20489

The first public beta of JAWS version 2020 has been posted. It’s free for JAWS 2019 users.

This version includes a variety of enhancements, including several improvements for web users. Many websites double-speak the names of controls because of the way they were programmed; this beta aims to reduce much of this double-speak as you move through forms. Improved support for modern web apps which use their own keyboard hotkeys is now included, with JAWS remembering the state of the virtual cursor across tabs in Chrome. This is especially useful for sites such as Gmail. Other improvements will benefit users of Microsoft Word, the Zoom conferencing platform, and the Convenient OCR feature. Check the source link to get your beta copy. Here’s a list of what’s new, taken from the public beta page:

New Features Added in JAWS 2020

The following features are new to JAWS 2020.

Reduced Double Speaking of Form Control Prompts

When navigating and filling out forms on the web, it has become increasingly common for web page authors to include the prompt inside the control in addition to assigning an accessible tag for the control. While non-screen reader users only see the written prompt, those using a screen reader get both the prompt and the accessible tag in speech, as well as in braille if a display is in use. Often, the web page author has assigned the same text for each, so it appears the screen reader is double speaking. In JAWS 2020, we have greatly reduced the amount of double speaking of form controls as you navigate using speech and braille by comparing the prompt and these tags, and only speaking or brailling both if they are different.

Note: For Public Beta 1, only the double speaking of prompts has been completed. The Braille representation will be corrected for Public Beta 2 in early October.
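The comparison described above can be sketched in a few lines. This is an illustrative reconstruction in Python, not actual JAWS code; the function name and the normalization rule are my assumptions:

```python
def render_control(prompt: str, accessible_tag: str) -> str:
    """Return what a screen reader would announce for a form control:
    if the visible prompt and the accessible tag carry the same text
    (ignoring case and extra whitespace), announce it once; otherwise
    keep both, since each carries distinct information."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())

    if norm(prompt) == norm(accessible_tag):
        return prompt                       # identical: announce once
    return f"{prompt} {accessible_tag}"     # different: keep both

print(render_control("First name:", "First Name:"))  # announced once
print(render_control("Email", "Email address"))      # both kept
```

Per the note above, the same check would presumably gate the braille output once that side is completed in Public Beta 2.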

Zoom Meeting Scripts Added for an Improved Experience

Thanks to Hartgen Consultancy, basic scripts for Zoom are now included directly in JAWS and Fusion to improve the experience when attending Zoom meetings. This platform is used for our quarterly FS Open Line program as well as the free training webinars we hold each month. These scripts offer a more pleasant experience by giving you more control over what you hear, without interrupting the flow as users enter or leave the room or make comments. Press INSERT+H to view a list of JAWS keystrokes available in Zoom, such as turning off alerts, speaking recent chat messages, and more. You can also press INSERT+W to view a list of Zoom hot keys.

Hartgen Consultancy also offers more advanced scripts for Zoom Pro if you are interested.

Enhanced JAWS and Invisible Cursor Support for Windows 10 Universal Apps

For years, JAWS users have relied on the JAWS cursor (NUM PAD MINUS) and Invisible cursor (NUM PAD MINUS twice quickly) to review and interact with areas of an application where the PC cursor cannot go. This includes reading textual information which is on-screen but not focusable, and interacting with controls which are only accessible using a mouse, as the mouse pointer follows the JAWS cursor and NUM PAD SLASH and NUM PAD STAR perform a left and right click. However, the Off-Screen Model (OSM), which has traditionally been used to support the JAWS and Invisible cursors, is becoming less and less available as newer technology such as UIA, found especially in Windows universal apps like the Calculator or the Windows Store, is now used exclusively for accessing screen content. This results in the JAWS and Invisible cursors becoming unusable when attempting to navigate in those windows; all you would hear was “blank” as you reviewed the screen, because the modern technology in use cannot be captured by the traditional Off-Screen Model. In those cases, the only solution was the Touch Cursor, something most users are not as familiar with.

JAWS 2020 now detects when focus is in an application where the OSM is not supported and will automatically use the new JAWS Scan cursor in these situations. You will use all of the same navigation commands as you would with the traditional JAWS cursor or the Invisible cursors.

For example, if you open the Calculator or Windows Store in JAWS 2020 and press NUM PAD MINUS, you will now hear JAWS announce “JAWS Scan Cursor” as these are apps that do not support the OSM. You can then use the ARROW keys like you always have done to move by character, word, line, as well as INSERT+UP ARROW to read the current line, or PAGE UP, PAGE DOWN, HOME, and END. The mouse pointer will also continue to follow as it always has. The only difference is that the cursor does not move from top to bottom or left to right. Instead, it moves by element the way the developer laid out the app.

While this works in many places, there are still some areas where more work by Freedom Scientific is required. For instance, if you use Office 365, and try to read your Account version information with the JAWS cursor commands, it is still not possible to navigate and read in these places. That work is underway and we plan to have an update for this area in the 2020 version soon. Stay tuned.
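The automatic fallback described above can be illustrated with a small sketch. The class and method names here are hypothetical, purely to show the decision JAWS 2020 makes, not its internals:

```python
class ReviewCursor:
    """Sketch of the fallback behavior (illustrative names, not JAWS
    internals): pressing NUM PAD MINUS activates the classic JAWS cursor
    when the app supports the Off-Screen Model (OSM); in UIA-only apps
    it activates the Scan cursor, which moves by UI element in the order
    the developer laid out the app rather than by screen position."""
    def __init__(self, osm_supported: bool):
        self.osm_supported = osm_supported

    def activate(self) -> str:
        # What JAWS announces when NUM PAD MINUS is pressed.
        return "JAWS Cursor" if self.osm_supported else "JAWS Scan Cursor"

    def next_element(self, elements, index):
        # Scan cursor movement: step to the next element in layout
        # order, stopping at the last one.
        return elements[min(index + 1, len(elements) - 1)]

calc = ReviewCursor(osm_supported=False)  # e.g. the UIA-based Calculator
print(calc.activate())
```

The same ARROW, INSERT+UP ARROW, PAGE and HOME/END commands apply in either mode; only the underlying traversal differs, as the release notes describe.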

Convenient OCR Updated to Use the Latest OmniPage

The recognition engine used by the JAWS Convenient OCR feature has been updated to Kofax OmniPage 20, formerly owned by Nuance. This offers greater accuracy when recognizing text from on-screen images as well as text from images captured with a PEARL camera or scanner.

For users needing to OCR using Hebrew or Arabic, these languages will be included in later public beta builds or by the final release at the latest. Once these languages are working, they will be installed with any English or Western European download of JAWS and Fusion.

Virtual Cursor Toggle Now Tab Specific in Google Chrome

Today, there are many web apps where using the Virtual Cursor is not the best approach. An example can be seen if you use Gmail in the Chrome browser. In these cases, it makes sense to toggle the Virtual Cursor off by pressing INSERT+Z and then use the application with the PC cursor. Many users also regularly open multiple tabs (CTRL+T) so they can easily access different sites, such as Gmail plus one or two other pages, moving between the open tabs using CTRL+TAB. This can become frustrating, as you need to constantly press INSERT+Z to get the right cursor in use as you switch between tabs.

Beginning with version 2020, we are introducing an option to help JAWS automatically remember the state of the Virtual Cursor for each tab once you set it. It will also announce whether the Virtual Cursor is on or off as you move between tabs. Once you close the browser or restart JAWS, it will revert to the default behavior, so you will need to set this again each session.
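Conceptually, this per-tab memory behaves like a small table keyed by tab, cleared on restart. A minimal Python sketch under that assumption (class and names are illustrative, not JAWS internals):

```python
class VirtualCursorState:
    """Remember the Virtual Cursor on/off state per browser tab.
    Defaults to on; the table is cleared when the browser or the
    screen reader restarts, matching the behavior described above."""
    def __init__(self, default: bool = True):
        self.default = default
        self._per_tab = {}

    def toggle(self, tab_id):
        # The user presses INSERT+Z while focused on this tab.
        state = self._per_tab.get(tab_id, self.default)
        self._per_tab[tab_id] = not state
        return self._per_tab[tab_id]

    def state(self, tab_id):
        # Announced when switching to this tab with CTRL+TAB.
        return self._per_tab.get(tab_id, self.default)

    def restart(self):
        # Closing the browser or restarting forgets every tab's setting.
        self._per_tab.clear()

vc = VirtualCursorState()
vc.toggle("gmail")                     # turn the Virtual Cursor off for Gmail
print(vc.state("gmail"), vc.state("news"))
```

Other tabs keep the default until the user toggles them, which is why switching tabs no longer requires pressing INSERT+Z each time.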

For the Public Beta, this feature is not turned on by default. It will be enabled by default in later beta builds. If you would like to try it out in the first beta, do the following:

  1. Press INSERT+6 to open Settings Center.
  2. Press CTRL+SHIFT+D to load the default file.
  3. Type “Tab” in the search field.
  4. Press DOWN ARROW until you locate “Virtual Cursor On/Off based on Browser Tabs.”
  5. Press the SPACEBAR to enable the option and then select OK.

Note: If you choose to enable this feature in public beta 1, you will hear the announcement of the Virtual Cursor state in certain situations as you navigate. This will be corrected in subsequent builds.

Contracted Braille Input Enhancements

For ElBraille users as well as those who regularly use a Braille display with their PC, JAWS 2020 offers significant improvements when typing in contracted Braille. In particular:

  • You should now be able to enter and edit text in numbered and bulleted lists in Word, WordPad, Outlook, and Windows Mail.
  • Contracted Braille input is now supported in more applications including PowerPoint and TextPad.
  • Improved Contracted Braille input in WordPad, especially when editing a numbered or bulleted list created in Word and opened in WordPad. This includes properly handling wrapped items, which previously showed the number or bullet on subsequent wrapped lines rather than indenting the text.
  • Improved Contracted Braille input in Chrome, Google docs, and other online editors which can create bulleted and numbered lists.
  • Typing rapidly using Contracted Braille in Microsoft Office as well as other applications should no longer result in text becoming scrambled.

General Changes in Response to Customer Requests

  • While browsing the internet, JAWS will no longer announce “Clickable” by default as you move through various content.

  • You should no longer hear the message “Press JAWS Key+ALT+R to hear descriptive text” as you navigate form controls and certain other elements on the web.
  • By default in Word and Outlook, JAWS will no longer announce “Alt SHIFT F10 to adjust Auto Correction” when you move to something that was auto corrected previously.
  • JAWS and Fusion will no longer gather a count of all the objects, misspellings, grammatical errors, and so on when a document is opened in Word. This will enable documents to load much faster, including very large documents containing a lot of these items. You can always press INSERT+F1 for an overview of what the document contains.
  • Improved responsiveness when closing Word after saving a document.
  • The AutoCorrect Detection option, previously only available in the Quick Settings for Word, can now also be changed in the Quick Settings for Outlook (INSERT+V).

https://support.freedomscientific.com/Downloads/JAWS/JAWSPublicBeta

Source: JAWS Public Beta

Category: News


J.J. Meddaugh is an experienced technology writer and computer enthusiast. He is a graduate of Western Michigan University with a major in telecommunications management and a minor in business. When not writing for Blind Bargains, he enjoys travel, playing the keyboard, and meeting new people.

 

 

 

Thx, Albert

 

-=-=-=-=-=-=-=-=-=-=-=-

Groups.io Links: You receive all messages sent to this group.

 

View/Reply Online (#20583): https://groups.io/g/GTTsupport/message/20583


-=-=-=-=-=-=-=-=-=-=-=-

 

This blind woman says self-checkouts lower the bar(code) for accessibility | CBC News

If you have a visual impairment, the self-checkout phenomenon can make shopping a difficult and frustrating process.
— Read on www.cbc.ca/news/canada/newfoundland-labrador/self-checkouts-accessibility-concerns-1.5243720

BlindShell, Simple, intuitive and accessible phones for visually impaired

BlindShell, Simple, intuitive and accessible phones for visually impaired
Date Saved: 7/5/19, 1:50 PM
Source: https://www.blindshell.com/
Note: Check above and below links for videos about this device.

New BlindShell Classic
Over the past few years, we have sold phones for the visually impaired to thousands of customers across 20 countries. We have worked to create a phone that would be durable, stylish, and most importantly, easy to use for the blind and visually impaired. Based on the feedback and input from our users, we introduced the BlindShell Classic last year. This phone encompasses the best of what the world of mobile phones for the blind offers.
• Carefully designed keypad with comfortable buttons.
• Voice Control or tactile keypad for the simplest to use phone yet.
• Optimized shape, which perfectly fits your hand.
• Lifetime updates and fantastic support.

Blindshell Classic
• Single button quick dial
• SOS emergency button
• Quick menu navigation by shortcuts
• FM radio
• Calendar
• E-mail
• Voice control
• Text dictation
• Object tagging

BLINDSHELL 2 BAROQU
• Voice control
• Text dictation
• Object tagging
• Color recognition
• Mp3 and audio-book player
• GPS position
• Games
• WhatsApp
• Facebook Messenger

WHAT SEPARATES BLINDSHELL FROM THE REST?
First and foremost, it’s been designed to be helpful. No frills. We’ve listened to our customers and honed its features to be simple. The BlindShell Classic caters to the actual needs of visually impaired users. The physical keypad and large assortment of applications are designed and chosen specifically for the blind user’s needs.
It is truly intuitive to use. You can either use the keypad or control your phone by voice. And yes, you’ll figure out how to operate it in less than 30 minutes.
Lastly, we wanted to develop a phone which will last. That’s why we carefully chose the BlindShell Classic design to be practical, sturdy, and easy to use. The lifelong free updates give peace of mind that you will be happy with your purchase for years to come.

Demonstration Video Re-posted from Carrie Morales, Live Accessible:
Hey Everyone,
The BlindShell Classic Phone is coming to the US. It’s a phone that’s specifically designed for the blind and visually impaired, and a great option for someone looking for a phone that has physical buttons, is very easy to use, and is totally accessible. Here’s a review I did of the phone if anyone is interested! https://youtu.be/XSE8grhy_8g

Carrie Morales
Website: LiveAccessible.Com
YouTube: Live Accessible
Instagram: @LiveAccessible
Twitter: @LiveAccessible
Email: carrie@liveaccessible.com

*Picture Description: Text reads “Live Accessible: blindness or low vision does not define or limit you” on a blue background.

iPadOS 13 Features: What’s New for iPad, iPad Pro and iPad Air by Khamosh Pathak

iPadOS 13 Features: What’s New for iPad, iPad Pro and iPad Air

Author: Khamosh Pathak

Date Written: Jun 3, 2019 at 5:00 PM

Date Saved: 6/4/19, 9:32 AM

Source: http://www.iphonehacks.com/2019/06/ipados-13-features-whats-new.html

 

Apple is finally taking the iPad seriously, and its way of showing it is a whole new OS specially designed for the iPad: iPadOS. While iPadOS shares a lot of features with iOS 13, it adds many iPad-specific features for enhanced multitasking, file management, Apple Pencil use, and pro app usage. Here are all the new iPadOS 13 features you should care about.

iPadOS 13 Features: Everything That’s New

  1. Dark Mode

 

iOS 13’s new Dark Mode is also available on iPadOS 13. It is system-wide. It extends from the Lock screen, Home screen, to stock apps. Apple has even integrated dynamic wallpapers that change when you switch to dark mode.

Dark Mode can be enabled from the Brightness slider and it can be scheduled to automatically turn on after sunset.

  2. Multiple Apps in Slide Over

iPadOS 13 features a big multitasking overhaul, and it starts with Slide Over. Now you can have multiple apps in the same window in Slide Over. Once you’ve got one floating window, you can drag in an app from the Dock to add more windows to it. Once more than one app is added to Slide Over, you’ll see an iPhone-style Home bar at the bottom. Swipe horizontally on it to switch between apps within the Slide Over panel, or swipe up to see all apps in Slide Over.

  3. Same App in Multiple Spaces

The next big thing is the fact that you can have multiple instances of the same app in multiple spaces. This means that you can pair Safari with Google Docs on one Space, Safari and Safari in another space and have Safari and Twitter open in yet another space.

And this works using drag and drop. You can just pick a Safari tab from the toolbar and drag it to the right edge of the screen to create another instance of the app.

  4. App Expose Comes to iPad

App Expose on iPad answers the question, how do you keep track of the same app across multiple spaces? Just tap on the app icon that’s already open and it will open App Expose. It will list all instances of the open app. You can tap on a space to switch to it or swipe up to quit the space.

  5. New Tighter App Grid on Home Screen

Apple has also tweaked the iPad Home screen grid so that you now have a row of 6 icons on the 11 inch iPad Pro.

  6. Pin Today Widgets on Home Screen

If you swipe in from the left edge of the Home screen, you’ll find the Today View widgets docked to the left edge, where you can see and use all your widgets easily. You can also pin the view so that it’s always available (from the Edit menu).

  7. Favorite Widgets for Home Screen

You can also pin your favorite widgets to the top so that they are always accessible.

  8. 30% Faster Face ID Unlocking

The new iPad Pros with Face ID now unlock up to 30% faster when running iPadOS 13.

  9. New Reminders App

The new Reminders app is also available on the iPad and it looks gorgeous. The sidebar has the four filters at the top, and your lists below. You can quickly tap on a list, see all reminders and create new ones. New reminders can be created using natural language input.

  10. Real Automation in Shortcuts App

There’s a new Automations tab that brings real-world automation to the iPad. Shortcuts can now be triggered automatically based on time, location and even by using NFC tags.

  11. Improved Photos App

Photos app brings an improved browsing experience. There’s a new Photos tab that is a list of all your photos. You can pinch in and out to zoom. From the top, you can switch to the Days tab to only show the best photos from a given day. The same goes for the Months tab as well.

  12. New Photo Editor

There’s a new photo editor in the Photos app. Just tap on the Edit button to access it. The new UI is much more visual and easier to use. All the standard tools are available, along with new tools for editing Brilliance, Highlights, Shadows, Saturation and more. There’s also a very good auto-enhance mode.

  13. New Video Editor

The new video editor is also quite good. You can quickly crop videos, change the aspect ratio, rotate videos, and more.

  14. Access Apple Pencil Tool Palette Anywhere

Apple is integrating the Apple Pencil deeply into iPadOS. The new Pencil tool palette will be available in more apps, and it can be minimized and moved around easily.

  15. Reduced Apple Pencil Latency

Apple Pencil is even faster with iPadOS 13. The latency has been reduced from 20ms to just 9ms.

  16. Full Page Markup Anywhere

You can swipe in from the bottom corner of the screen with the Apple Pencil to take a screenshot and start annotating it. You’ll also see an option to take a full-page screenshot on the right side.

  17. Scroll Bar Scrubbing

You can grab the scroll bar on the right in any app and quickly move it up or down to jump to a particular part of the page.

  18. Use Your iPad As a Second Mac Display

Apple’s new Sidecar feature will let you use the iPad as a secondary display for a Mac that’s running macOS Catalina. It will work both wirelessly and using a wired connection. It’s quite fast and there’s no latency.

  19. Use Your iPad As a Mac Tablet with Apple Pencil

If you have an Apple Pencil, you can use the attached iPad as a drawing tablet for your Mac.

  20. Easily Move the Cursor Around

Apple is also taking text selection seriously. You can now just tap and hold on the cursor to pick it up and instantly move it around.

  21. Quickly Select Block of Text

Text selection is way easier now. Just tap on a word and instantly swipe to where you want to select, like the end of the paragraph. iPadOS will select all the text in between the two points.

  22. New Gestures for Copy, Paste, and Undo

Once text is selected, you can use gestures on it: pinch in with three fingers to copy, pinch out with three fingers to paste, and swipe back with three fingers to undo typing or an action.

  23. Peek Controls

There’s no 3D Touch on the iPad, but it looks like there’s no need for it. You can tap and hold on app icons and links to see a preview and actionable items. This works very well in apps like Safari.

  24. New Compact Floating Keyboard

You can detach the keyboard in iPadOS 13. It turns into a floating window, with a compact view that can be moved around anywhere.

  25. Gesture-Based Typing on the Compact Keyboard

You can type on the iPad’s software keyboard using gestures: just glide your finger over the keys instead of tapping them. It’s similar to SwiftKey.

  26. New Start Page and UI for Safari

Safari gets a slightly refreshed UI and a more feature-rich Start page. You’ll now see Siri suggestions for websites and pages in the bottom half. Plus, there’s a new settings screen where you can increase or decrease the font size of the text (without zooming into the page itself).

  27. Desktop Class Browsing in Safari

Safari automatically presents a website’s desktop version for iPad. Touch input maps correctly when a website expects mouse or trackpad input. Website scaling takes advantage of the large iPad screen, so you’ll see websites at their optimal size. And scrolling within web pages is faster and more fluid.

  28. Full Safari Toolbar in Split View

Now, even when you’re in Split View, you’ll see the full tab toolbar. This makes it easier to switch between tabs and perform actions.

  29. Open Three Safari Web Pages At The Same Time

Thanks to the new multitasking features, you can have three Safari tabs open at the same time. First, take a tab and put it into Split View. Next, take another tab and put it in Slide Over.

  30. Safari Gets a Full-Fledged Download Manager

Safari gets a download manager on both the iPhone and iPad. When you visit a link that can be downloaded, you’ll see a popup asking if you want to download the file. Then a new Download icon will appear in the toolbar. Tap on it to monitor all your downloads.

Once the download is finished, you’ll find it in the Downloads folder in the Files app. It will be stored locally.

  31. New Super-Charged Share Sheet

The share sheet gets quite a big overhaul. At the top is a new smart sharing option with AirDrop and contact suggestions. The actions section has been redesigned into a vertical list; all available actions for the app are listed there, and there’s no need to enable or disable actions anymore.

  32. Create Memoji on Any iPad

You can now create multiple Memoji on any iPad with an A9 processor or higher. Memoji creation is also much better now.

  33. Share Memoji Stickers From iPad

Once you create a Memoji, Apple will automatically create a sticker pack for you. It can be accessed in the Messages app and in the native keyboard, so you can share the stickers using any messaging app.

  34. Desktop Class Text Formatting Tools for Mail App

The Mail app has a new formatting bar. You can change the font, font size, indentation and a lot more.

  35. New Gallery View in Notes App

Notes has a new Gallery view which shows all photos, documents and attachments at a glance.

  36. Audio Sharing with AirPods

With two pairs of AirPods connected, you can now send a single stream of audio to both of them.

  37. Manage Fonts Easily on iPad

iPadOS 13 will let you download and install fonts from the App Store. And you’ll be able to manage them from Settings. Once added, a font will be available across all supported apps.

  38. A New Detailed Column View for Files App

The Files app has a new detailed column view, similar to the Finder. It will help users quickly drill down into complex nested folder structures.

  39. Quick Actions

When you’re in the column view and you select a file, you’ll see quick actions for it right there below the preview. You can convert an image to a PDF, unzip files and more.

  1. New Downloads Folder

There’s finally a designated Downloads folder in the Files app. Safari and Mail apps use this for now. But I hope third-party apps will be able to use it as well.

  1. Create Local Storage Folders

One of the biggest annoyances of the Files app has been fixed. You can now create folders in the iPad’s local storage. There’s no need to use iCloud Drive every time. Apps will be able to use these folders as well.

  1. Zip and Unzip Files

The Files app can now quickly zip and unzip files.

  1. Easily Share iCloud Drive Folder With Anyone

You can easily share an iCloud Drive folder with any user from the Files app. This will ease the collaboration process for iPad Pro users.
  2. Add File Servers to Files App

You can also add remote file servers to the Files app.

  1. Connect External Hard Drive, SD Card Reader or USB Drive to iPad

You can finally connect any USB external drive to the iPad Pro using the USB-C port, and it will show up as a drive in the Files app sidebar. It works just as it does on the Mac: you’ll be able to access all files, copy files over, move files and even save files from apps directly to the external drive.
  2. Mouse Support Using Accessibility

There’s official support for an external mouse on the iPad, but it’s implemented as an accessibility feature: the cursor imitates a touch point. You can add a Bluetooth mouse from Settings; a wired USB-C mouse will work as well.

  1. Unintrusive Volume HUD

The volume HUD now shows up as a small pill-shaped slider near the top status bar.

  1. Wi-Fi and Bluetooth Selection from Control Center

If you tap and hold the Wi-Fi or Bluetooth toggle, you’ll be able to switch between networks right from Control Center now.
  2. iOS 13 Features in iPadOS 13

There’s a lot more to iPadOS 13. The smaller features from iOS 13 have been carried over to iPadOS as well, features like:

  • Improved Siri voice
  • Voice Control
  • Newer Accessibility options
  • Low Data mode for Wi-Fi networks

We’ve outlined these features in detail in our iOS 13 roundup so take a look at that list to learn more.

Your Favorite iPadOS 13 Features?

What are some of your favorite new features in iPadOS 13? What did we miss on this list? Share with us in the comments below.

 

 

Yes, Alexa, Siri, and Google are listening — 6 ways to stop devices from recording you by Janet Perez, Komando.com


Full text of the article follows:

 

Seems like we owe the tinfoil hat club a big apology. Yes, there are eyes and ears everywhere in just about any large city in the world. Here in the good old U.S. of A., our smartphones, tablets, computers, cars, voice assistants and cameras are watching and listening to you.

 

We don’t know what is more troubling — that these devices keep track of us or that we shrug our shoulders and say, “Oh well?” That attitude of surrender may stem from an overwhelming sense of helplessness. “Technology is everywhere. Why fight it?”

 

Truth is, it’s not a fight. It’s a series of tap-or-click settings, which we’ll walk you through.

 

You can take control of what your devices hear and record, and it’s not that hard. We have 6 ways to help you turn off and tune out Alexa, Siri, and Google, as well as smartphones, third-party apps, tablets, and computers.

 

How to stop Alexa from listening to you

 

Weeks after the public discovered that Alexa, and by extension Echo devices, are always listening, Amazon announced a new Alexa feature that’s already available. It allows you to command the voice assistant to delete recent commands. Just say, “Alexa, delete everything I said today.”

 

Sounds great, but there are still the problems of Alexa always listening and your old recordings. Let’s tackle the old recordings first. Unless the delete command is expanded to include all recordings, you still have to remove old files manually. Here’s what to do:

 


  1. Open the Alexa app and go into the “Settings” section.
  2. Select “History” and you’ll see a list of all the entries.
  3. Select an entry and tap the Delete button.
  4. If you want to delete all the recordings with a single click, you must visit the “Manage Your Content and Devices” page at amazon.com/mycd.


 

As for Alexa and Echo devices always listening, well, you could turn off each of the devices, but then what’s the point of having them? The real issue is that we discovered Amazon employees around the world are listening to us and making transcriptions.

 

Here’s how to stop that:

 


  1. Open the Alexa app on your phone.
  2. Tap the menu button on the top left of the screen.
  3. Select “Settings” then “Alexa Account.”
  4. Choose “Alexa Privacy.”
  5. Select “Manage how your data improves Alexa.”
  6. Turn off the toggle next to “Help Develop New Features.”
  7. Turn off the toggle next to your name under “Use Messages to Improve Transcriptions.”


 

For extra privacy, there’s also a way to mute the Echo’s mics. To turn the Echo’s mic off, press the microphone’s off/on button at the top of the device. Whenever this button is red, the mic is off. To reactivate it, just press the button again and it will turn blue.

 

How to stop Siri from recording what you say

 

Alexa isn’t the only nosy assistant. Don’t forget the ones on your iPhones and Androids. On your iPhone, “Hey Siri” is always on, waiting to receive your command to call someone, send a text message, etc. Apple says your iPhone’s mic is always on as it waits for the “Hey Siri” command, but swears it is not recording.

 

If it still makes you nervous, you don’t have to disable Siri completely to stop the “Hey Siri” feature. On your iPhone, go to Settings >> Siri & Search >> toggle off “Listen for Hey Siri.”

 

Note: “Hey Siri” only works for iPhone 6s or later. iPhone 6 or earlier has to be plugged in for the “Hey Siri” wake phrase to work.

 

How to delete your recordings from Google Assistant

 

Google Assistant has the “OK Google” wake-up call, but the company introduced the My Account tool that lets you access your recordings and delete them if you want. You can also tell Google to stop recording your voice for good.

 

Here’s how to turn off the “OK Google” wake phrase: On Android, go to Settings >> Google >> Search & Now >> Voice and turn “Ok Google” detection off.

 

How to control third-party apps that record you

 

Even if you do all these steps for your Apple and Android devices, third-party apps you download could have their own listening feature. Case in point: Facebook (although it denies it). It’s still good practice to check whether third-party apps are listening.

 

Here’s how to stop Facebook from listening to you:

 

If you are an iPhone user, go to Settings >> Facebook >> slide the toggle next to Microphone to the left so it turns from green to white.

 

Or, you can go to Settings >> Privacy >> Microphone >> look for Facebook and slide the toggle next to it to the left to turn off the mic. You can toggle the mic on and off for other apps this way, too.

 

For Android users go to Settings >> Applications >> Application Manager >> look for Facebook >> Permissions >> turn off the mic.

 

Tricks to disable screen recorders on tablets

 

Certain Apple iPads have the phone’s “Hey Siri” wake-up command feature. They are the 2nd-gen 12.9-inch iPad Pro and the 9.7-inch iPad Pro. Other iPad and iPod Touch models have to be plugged in for the “Hey Siri” wake phrase to work.

 

The bad news for privacy seekers is that iPads come with a screen recording feature that also records audio. It may pose issues in terms of both privacy and security.

 

You can disable the screen recording feature through another feature, “Screen Time”:

 


  1. Open the Settings app, and then tap Screen Time. On the Screen Time panel, tap “Content & Privacy Settings.”
  2. Tap “Content Restrictions.” If you don’t see this option, turn on the switch next to “Content & Privacy Restrictions” to unhide it.
  3. Under “Game Center,” tap “Screen Recording.”
  4. Tap “Don’t Allow” and then exit the Settings app. The screen recording control should no longer work, even if it is enabled within the Control Center.


 

Screen Time is available in iOS 12 and above. If you are still using iOS 11 or iOS 10 on your iPhone or iPad, the above steps can be found under Settings >> General >> Restrictions.

 

Android tablets also can record video and audio. However, you have to use a third-party app to disable the camera.

 

On your Android device, go to the Play Store, then download and install the app called “Cameraless.”

 


  1. Once installed, launch the app from your app drawer.
  2. On the app’s main menu, tap the option for “Camera Manager On/Off.” By default, the camera manager is set to “Off,” so you need to enable the app first as one of your device administrators before you can switch it “On.”
  3. Once your camera manager is “On,” just tap the option for “Disable camera,” then wait until the notice disappears from your screen.
  4. Once you’re done, just close the app, then go to your tablet’s camera icon.
  5. If successfully disabled, you’ll immediately get a notice that your device camera has been disabled due to security policy violations. This is the notice that you’ll get from the “Cameraless” app. If you tap “OK” you’ll be taken back to your Home screen.


 

Desktops and laptops are watching and listening too


 

We’ve been warned for years about hackers taking control of cameras on your computer screen. No need for elaborate instructions on disabling and enabling the camera: just slap a sticker on it and only remove it when you have to use Skype. Sometimes the best solutions are the simplest ones.

 

Unfortunately, you do have to root around your computer a bit to turn off mics.

 

For PCs running Windows 10, the process is actually quite painless. Right-click on the Start button and open “Device Manager.” In the Device Manager window, expand the “Audio inputs and outputs” section and you will see your microphone listed as one of the interfaces. Right-click on “Microphone” and select “Disable.” You’re done.

 

For Macs, there are two methods depending on how old your operating system is. For Macs with newer operating systems:

 


  1. Launch “System Preferences” from the Apple menu in the upper left corner.
  2. Click on the “Sound” preference panel.
  3. Click on the “Input” tab.
  4. Drag the “Input volume” slider all the way to the left so it can’t pick up any sound.
  5. Close “System Preferences.”


 

If you have an older operating system, use this method:

 


  1. Launch the “System Preferences.”
  2. Click on “Sound.”
  3. Click on the “Input” tab.
  4. Select “Line-in.”
  5. Close “System Preferences.”


 

Now you know how to take control of your devices and how they listen to and record you. It’s a pretty simple way to get your privacy back, at least some of it.

 

Stop Facebook’s targeted advertising by changing your account settings

 

Let me be frank: I only keep a Facebook account to engage with listeners of my national radio show. I don’t use my personal account. I stepped away from the social media platform, and I never looked back.

 

Click here to read more about Facebook advertising.

 

Please share this information with everyone. Just click on any of the social media buttons on the side.

 


  • Fraud/Security/Privacy
  • Alexa
  • Amazon
  • Android
  • Apple
  • Echo
  • Facebook
  • Google
  • iPad
  • Mac
  • PC
  • Privacy
  • Security
  • Siri


 


 

Government of Canada investing in teaching digital skills to Canadians who need them most, CNIB Foundation

*Note: This program is only available to British Columbia and Nova Scotia residents.

Government of Canada investing in teaching digital skills to Canadians who need them most

Author:

Date Written: May 20, 2019 at 5:00 PM

Date Saved: 5/28/19, 2:19 PM

Source: https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/government-of-canada-investing-in-teaching-digital-skills-to-canadians-who-need-them-most0.html

News release

Canadians needing fundamental digital skills training to benefit from this investment Digital skills widen Canadians’ access to a world of possibilities. All Canadians should have the necessary skills to get online by using computers, mobile devices and the Internet safely and effectively. That is why the Government is putting in place initiatives to ensure no one is left behind as the world transitions to a digital economy.

Today, the Honourable Joyce Murray, President of the Treasury Board and Minister of Digital Government, on behalf of the Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development, announced an investment of $1.3 million in the Canadian National Institute for the Blind’s (CNIB) Connecting with Technology initiative. This initiative will deliver fundamental digital literacy skills training to participants in British Columbia and across the country.

CNIB’s Connecting with Technology initiative will be targeted at seniors who are blind or partially sighted. This initiative will reach about 750 participants, providing them with training in digital literacy and offering required assistive technologies.

This investment is being provided through the Digital Literacy Exchange program, a $29.5-million program that supports digital skills training for those known to be most at risk of being left behind by the rapid pace of digital technology adoption: seniors, people with disabilities, newcomers to Canada, Indigenous peoples, low-income Canadians, and individuals living in northern and rural communities.

The program aligns with the Government’s Innovation and Skills Plan, a multi-year strategy to create good jobs and ensure Canadians have the skills to succeed.

End of article.

 

 

Voice Dream Scanner: A New Kind of OCR by Bill Holton, AccessWorld

There is a new player in the optical character recognition (OCR) space, and it comes from an old friend: Winston Chen, the developer of Voice Dream Reader and Voice Dream Writer, both of which we’ve reviewed in past issues of AccessWorld. In this article we’ll start out with a brief conversation with Chen. Then we’ll take a look at the developer’s latest offering: Voice Dream Scanner. Spoiler alert—it will probably be the best $5.99 you’ll ever spend on a text recognition app!
AccessWorld readers who use their phones to audibly read e-Pub books, PDFs or Bookshare titles are likely already familiar with Voice Dream Reader. It works so well with VoiceOver and TalkBack, it’s hard to believe it wasn’t developed specifically for the access market. But according to Chen, “I just wanted to build a pocket reader I could use to store all my books and files so I could listen to them on the go. No one was more surprised than me when I began receiving feedback from dyslexic and blind users describing how helpful Voice Dream Reader was for their needs and making some simple suggestions to improve the app’s accessibility.”
Chen’s second offering, Voice Dream Writer, was also directed at the mainstream market. “Sometimes it’s easier to proofread your document by listening to it instead of simply rereading the text,” says Chen. At the time, Apple’s VoiceOver cut-and-paste features and other block text manipulation capabilities were, shall we say, not quite what they are today. The innovative way Chen handled these functions made Voice Dream Writer equally useful to users with visual impairments.
Reinventing the OCR Engine
“I’ve been wanting to add OCR to Voice Dream Reader for a few years now,” says Chen. “It would be useful for reading protected PDF’s and handouts and memos from school and work.”
The hurdle Chen kept encountering was finding a useable OCR engine. “There are some free, open source engines, but they don’t work well enough for my purposes,” he says. “The ones that do work well are quite expensive, either as a one-time license purchase with each app sold or with ongoing pay-by-the-use options. Either of these would have raised the price I have to charge too much for my value proposition.”
Last year, however, Chen began experimenting with Apple’s Vision framework, an artificial intelligence (AI) toolkit built into the latest iOS versions, along with Google’s Tesseract, TensorFlow Lite, and ML Kit.
“Instead of using a single standard OCR engine, I combined the best aspects of each of these freely available tools, and I was pleasantly surprised by the results.”
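The article doesn’t say exactly how Chen combines these engines, but the general idea of an OCR ensemble can be sketched in a few lines of Python. Everything below is hypothetical and purely illustrative: the stub engines, confidence scores, and function names are mine, not Voice Dream Scanner’s actual code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OcrResult:
    engine: str        # which engine produced this candidate
    text: str          # the recognized text
    confidence: float  # engine-reported confidence, 0.0 to 1.0

def best_result(results: List[OcrResult]) -> OcrResult:
    """Keep the candidate whose engine was most confident."""
    return max(results, key=lambda r: r.confidence)

# Stub "engines" standing in for real recognizers (Vision, Tesseract, ...).
def engine_a(image: bytes) -> OcrResult:
    return OcrResult("vision", "Hello world", 0.92)

def engine_b(image: bytes) -> OcrResult:
    return OcrResult("tesseract", "He11o world", 0.71)

def recognize(image: bytes, engines: List[Callable[[bytes], OcrResult]]) -> str:
    """Run every engine on the image and return the best guess."""
    return best_result([run(image) for run in engines]).text
```

A real ensemble would likely compare engines region by region rather than per page, but the selection principle is the same.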
Instead of making OCR a Voice Dream Reader feature, Chen decided to incorporate his discovery into a separate app called Voice Dream Scanner. “I considered turning it into an in-app purchase, only there are a lot of schools that use Reader and they aren’t allowed to make in-app purchases,” he says. As to why he didn’t simply make it a new Reader feature, he smiles, “I do have a family to feed.”
Chen has been careful to integrate the new Voice Dream Scanner functionality into VD Reader, however. For example, if you load a protected PDF file into the app and open it, the Documents tab now offers a recognition feature. You can now also add to your Voice Dream Reader Library not only from Dropbox, Google Drive, and other sources, including Bookshare, but using your device’s camera as well.
To take advantage of this integration you’ll need both Voice Dream Reader and Voice Dream Scanner. Both can be purchased from the iOS App Store. VD Reader is also available for Android, but currently VD Scanner is iOS only.
Of course you don’t have to have VD Reader to enjoy the benefits of the new Voice Dream Scanner.
A Voice Dream Scanner Snapshot
The app installs quickly and easily, and displays with the icon name “Scanner” on your iOS device. Aim the camera toward a page of text. The app displays a real-time video image preview which is also the “Capture Image” button. Double tap this button, the camera clicks, and the image is converted to text almost immediately. You are placed on the “Play” button, give a quick double tap and the text is spoken using either a purchased VD Reader voice or your chosen iOS voice. Note: You can instruct Scanner to speak recognized text automatically in the Settings Menu.
From the very first beta version of this app I tested, I was amazed by the speed and accuracy of the recognition. The app is amazingly forgiving as far as camera position and lighting. On envelopes it read the return addresses, postmarks and mailing addresses. Entire pages of text voiced without a single mistake. Scanner even did an excellent job with a bag of potato chips, even after it was crumpled and uncrumpled several times. There is no OCR engine to download; recognition is done locally, so a network connection is not required. I used the app with equal success even with Airplane Mode turned on.
After each scan you are offered the choice to swipe left once to reach the Discard button, twice to reach the Save button. Note: the VoiceOver two-finger scrub gesture also deletes the current text.
Scanner does not save your work automatically. You have the choice to save it as a text file, a PDF, or to send it directly to Voice Dream Reader. You probably wouldn’t send a single page to Reader, but the app comes with a batch mode. Use this mode to scan several pages at once and then save them together: perfect for that 10-page print report your boss dropped on your desk, or maybe the short story a creative writing classmate passed out for review.
Other Scanner features of interest to those with visual impairments are edge detection and a beta version of auto capture.
Edge detection plays a tone that grows increasingly steady until all four edges are visible, at which time it becomes a solid tone. Auto capture does just that, but because the AI currently detects squares where there is no text, the feature is only available as a beta. However, if you’re using a scanner stand it will move along quite nicely, nearly as fast as you can rearrange the pages.
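The edge-detection feedback described above can be sketched as a simple mapping from visible edges to beep spacing. This toy Python function is my own illustration; the linear mapping and the 150 ms step are invented tuning values, not anything from Voice Dream Scanner.

```python
def beep_interval_ms(edges_visible: int) -> int:
    """Map the number of detected document edges (0-4) to a beep interval.

    More edges in frame means faster beeps; with all four edges visible
    the interval drops to 0, i.e. a continuous (solid) tone.
    """
    if not 0 <= edges_visible <= 4:
        raise ValueError("edges_visible must be between 0 and 4")
    return (4 - edges_visible) * 150  # 150 ms per missing edge (arbitrary)
```

An audio loop would simply wait `beep_interval_ms(n)` between beeps, re-checking the camera frame each time.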
You can also import an image to be recognized. Unfortunately, as of now, this feature is limited to pictures in your photo library. There is currently no way to send an e-mail or file image to Scanner. Look for this to change in an upcoming version.
The benefits of Voice Dream Scanner are by no means limited to the blindness community. Chen developed the app to be used as a pocket player for documents and other printed material he wishes to scan and keep. Low vision users can do the same, then use either iOS magnification or another text-magnification app to review documents. It doesn’t matter in which direction the material is scanned. Even upside-down documents are saved right-side up. Performance is improved by the “Image Enhancement” feature, which attempts to locate the edges of scanned documents and save them more or less as pages.
The Bottom Line
I never thought I’d see the day when I would move KNFB-Reader off my iPhone’s Home screen. Microsoft’s Seeing AI gave it a good run for its money and until now I kept them both on my Home screen. But I have now moved KNFB-Reader to a back screen and given that honored spot to Voice Dream Scanner.
Most of my phone scanning is done when I sort through the mail. Seeing AI’s “Short Text” feature does a decent job helping me sort out which envelopes to keep and which to toss into my hardware recycle bin. But Scanner is just as accurate as any OCR-engine based app, and so quick that the confirmation announcement of the Play button often voices after the scanned document has already begun to read.
This is the initial release. Chen himself says there is still work to be done. “Column recognition is not yet what I hope it will be,” he says. “I’d also like to improve auto-capture and maybe offer users the choice to use the volume buttons to initiate a scan.”
Stay tuned.
This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.
Comment on this article.
Related articles:
• Envision AI and Seeing AI: Two Multi-Purpose Recognition Apps by Janet Ingber
• An Evaluation of OrCam MyEye 2.0 by Jamie Pauls
More by this author:
• Letters, We Get Letters: Receiving Digital Scans of Your Mail Envelopes Using Informed Delivery
• A Look at the New Narrator, Microsoft’s Built-In Windows Screen Reader

Getting the Job Done with Assistive Technology: It May Be Easier Than You Think, AccessWorld

Author: Jamie Pauls

I remember getting my first computer back in the early 90s almost like it was yesterday. A friend of mine was receiving regular treatments from a massage
therapist who happened to be blind. My friend mentioned that this gentleman used a computer with a screen reader. I was vaguely aware that this technology
existed, but I never really considered using a computer myself until that first conversation I had with my friend. I began doing some research, and eventually
purchased my first computer with a screen reader and one program included. I’m sure there were a few other programs on that computer, but WordPerfect is
the only one I recall today. The vendor from whom I purchased the computer came to my home, helped me get the computer up and running, and gave me about
a half-hour of training on how to use the thing. A few books from what is now
Learning Ally
as well as the
National Library Service for the Blind and Physically Handicapped
along with some really late nights were what truly started me on my journey. I sought guidance from a few sighted friends who were more than willing to
help, but didn’t have any knowledge about assistive technology. There were times when I thought I had wasted a lot of money and time, but I eventually
grew to truly enjoy using my computer.

I eventually became aware of a whole community of blind people who used assistive technology. They all had their preferred screen reader, and most people
used only one. Screen readers cost a lot of money and hardware-based speech synthesizers increased the cost of owning assistive tech. Unless the user was
willing to learn how to write configuration files that made their screen reader work with specific programs they wanted or needed to use, it was important
to find out what computer software worked best with one’s chosen screen reader. I eventually outgrew that first screen reader, and spent money to switch
to others as I learned about them. I have no idea how much money I spent on technology in those early years, and that is probably for the best!

Fast forward 25 years or so, and the landscape is totally different. I have a primary desktop PC and a couple laptop computers all running Windows 10.
I have one paid screen reader—JAWS for Windows from
Vispero
—and I use two free screen-reading solutions—NVDA, from
NVAccess
and Microsoft’s built-in screen reader called Narrator.

I also have a MacBook Pro running the latest version of Apple’s Mac operating system that comes with the free VoiceOver screen reader built in. I have
access to my wife’s iPad if I need to use it, and I own an iPhone 8 Plus. These devices also run VoiceOver. Finally, I own a BrailleNote Touch Plus,
HumanWare’s
Android-based notetaker designed especially for the blind.

Gone are the days when I must limit myself to only one screen reader and one program to get a task accomplished. If a website isn’t behaving well using
JAWS and Google’s Chrome browser, I might try the same site using the Firefox browser. If I don’t like the way JAWS is presenting text to me on that website,
maybe I’ll switch to NVDA. If the desktop version of a website is too cluttered for my liking, I’ll often try the mobile version using either Safari on
my iPhone, or Chrome on my BrailleNote Touch.

The lines between desktop application and Internet site have blurred to the point that I honestly don’t think about it much anymore. It is often possible
to use either a computer or a mobile device to conduct banking and purchase goods.

So what makes all this added flexibility and increased choice possible, anyway? In many cases, the actual hardware in use is less expensive than it used
to be, although admittedly products such as the BrailleNote Touch are still on the high end of the price spectrum. Along with the availability of more
screen readers and magnification solutions than ever before, the cost of most of these solutions has come down greatly. Even companies like Vispero that
still sell a screen reader that can cost over a thousand dollars if purchased outright are now offering software-as-a-service options that allow you to
pay a yearly fee that provides the latest version of their software complete with updates for as long as you keep your subscription active.

While some may not consider free options such as NVDA or Narrator to be as powerful and flexible as JAWS, they will be perfectly adequate for other people
who aren’t using a computer on the job complete with specialized software that requires customized screen reader applications to make it work properly.
There are those who will rightly point out that free isn’t really free. You are in fact purchasing the screen reader when you buy a new computer as is
the case with VoiceOver on the Mac. While this may be true, the shock to the pocketbook may not be as noticeable as it would be if you had to plunk down
another thousand bucks or so for assistive technology after you had just purchased a new computer.

In addition to the advancements in screen reading technology along with the reduced cost of these products, app and website developers are becoming increasingly
educated about the needs of the blind community. I once spoke with a game developer who told me that he played one of his games using VoiceOver on the
iPhone for six weeks so he could really get a feel for how the game behaved when played by a blind person. Rather than throwing up their hands in frustration
and venting on social media about how sighted developers don’t care about the needs of blind people, many in the blind community are respectfully reaching
out to developers, educating them about the needs of those who use assistive technology, and giving them well-deserved recognition on social media when
they produce a product that is usable by blind and sighted people alike. Also, companies like Microsoft and Apple work to ensure that their screen readers
work with the company’s own browsers, including Safari and Microsoft Edge. Google and Amazon continue to make strides in the area of accessibility as well. Better
design and standards make it more likely that multiple screen readers will work well in an increasing number of online and offline scenarios.

You may be someone who is currently comfortable using only one screen reader with one web browser and just a few recommended programs on your computer.
You may be thinking that everything you have just read in this article sounds great, but you may be wondering how to actually apply any of it in your life.
First, I would say that if you are happy with your current technology then don’t feel intimidated by someone else who uses other solutions. That said,
I would urge you to keep your screen reading technology up to date as far as is possible. Also, make sure that you are using an Internet browser that is
fully supported by the websites you frequently visit. This will ensure that your experience is as fulfilling as it should be. For example, though Microsoft
Internet Explorer has been a recommended browser for many years for those using screen access technology due to its accessibility, it is no longer receiving
feature updates from Microsoft, and therefore many modern websites will not display properly when viewed using it.

If you think you would like to try new applications and possibly different assistive technology solutions but you don’t know where to start, keep reading.

Back when I first started using a computer, I knew of very few resources to which I could turn in order to gain skills in using assistive technology. Today,
there are many ebooks, tutorials, webinars, podcasts, and even paid individual training services available for anyone who wishes to expand their knowledge
of computers and the like. One excellent resource that has been referenced many times in past issues of AccessWorld is
Mystic Access,
where you can obtain almost every kind of training mentioned in the previous sentences. Another resource you may recognize is the
National Braille Press,
which has published many books that provide guidance on using various types of technology. Books from National Braille Press can generally be purchased
in both braille and electronic formats.

There are also many online communities of people with vision loss who use a specific technology. Two of the most well known are
AppleVis
for users of iOS devices and the
Eyes-Free Google Group
for users of the Android platform. Both communities are places where new and long-time users of these platforms can go to find assistance getting started
with the technology or for help troubleshooting issues they may encounter.

While I vividly recall my first experiences as a novice computer user, it is almost impossible for me to imagine actually going back to those days. Today,
the landscape is rich and the possibilities are endless for anyone who wishes to join their sighted counterparts in using today’s technology. While there
are still many hurdles to jump, I am confident that things will only continue to improve as we move forward.

So fear not, intrepid adventurer. Let’s explore this exciting world together. In the meantime, happy computing!

This article is made possible in part by generous funding from the James H. and Alice Teubert Charitable Trust, Huntington, West Virginia.


Related articles:

• Looking Back on 20 Years of Assistive Technology: Where We’ve Been and How Far and Fast We’ve Come
by Bill Holton
• Getting the Most out of Sighted Computer Assistance: How to Help the Helpers
by Bill Holton

More by this author:

• Pinterest Takes Steps Toward Accessibility
• A Review of “Stress Less, Browse Happy: Your Guide to More Easily and Effectively Navigating the Internet with a Screen Reader,” an audio tutorial from
Mystic Access


 

Alt-texts: The Ultimate Guide by Daniel Göransson

Alt-texts: The Ultimate Guide

Author: Daniel Göransson

Date Written: Oct 14, 2017 at 5:00 PM

Date Saved: 5/14/19, 1:43 PM

This post contains everything you need to know about alt-texts: when to use them and how to craft them perfectly. By me, Daniel, a web developer with a vision impairment who uses a screen reader in my day-to-day life.

 

My experience of images on the web

I use a combination of magnification and a screen reader when surfing the web. As a rule of thumb, I use magnification on larger screens and a screen reader on smaller devices.

I, like everyone else, come across many images when surfing the web. If I’m using a screen reader I depend on getting a description of the image – the alt-text – read to me.

Many times the alt-text is not helpful, often even a waste of my time because it doesn’t convey any meaning.

Let me illustrate this on The Verge’s startpage. This is what it looks like for sighted people:

 

Below is what I see. I’ve replaced the images with what my screen reader reads:

 

Not very useful, huh?

Here are some common alt-text-fails I come across:

  • “cropped_img32_900px.png” or “1521591232.jpg” – the file names, probably because the image has no alt-attribute.
  • “” – on every image in the article, probably for improving search ranking (SEO).
  • “Photographer: Emma Lee” – probably because the editor doesn’t know what an alt-text is for.

Alt-texts are not always this bad, but there’s usually a lot to improve upon. So whether you are a complete beginner or want to take your “game” to the next level, here’s our ultimate guide to alt-texts!

What is an alt-text?

An alt-text is a description of an image that’s shown to people who for some reason can’t see the image. Among others, alt-texts help:

  • people with little or no vision
  • people who have turned off images to save data
  • search engines

The first group – people with little or no vision – is arguably the one that benefits most from alt-texts. They use something called a screen reader to navigate the web. A screen reader transforms visual information to speech or braille. To do this accurately, your website’s images need to have alt-texts.

Alt-texts are super important! So important that the Web Content Accessibility Guidelines (WCAG) have alt-texts as their very first guideline:

All non-text content that is presented to the user has a text alternative that serves the equivalent purpose.
– WCAG Success Criterion 1.1.1

How do I add an alt-text?

In html, an alt-text is an attribute in an image element:

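As a minimal sketch (the file name here is illustrative, and the description reuses an example from later in this guide):

```html
<!-- The alt attribute carries the text alternative for the image -->
<img src="dog-hoop.jpg" alt="Dog jumping through a hoop. Illustration.">
```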

Most content management systems (CMS), like WordPress, let you create the alt-text when you upload an image:

 

The field is usually named “Alt-text”, “Alternative text” or “Alt”, but in some interfaces it’s called “Image description” or something similar.

Let’s create the perfect alt-text!

Here are the steps to crafting fabulous alt-texts!

It might sound obvious, but an alt-text should describe the image. For example:

  • “Group of people at a train station.”
  • “Happy baby playing in a sandbox.”
  • “Five people in line at a supermarket.”

Things that do not belong in an alt-text are:

  • The name of the photographer. This is very common, but makes absolutely no sense.
  • Keywords for search engine optimization. Don’t cram alt-text with irrelevant words you’re hoping to rank high on Google with. That’s not what alt-texts are for and it will confuse your users.

Content of the alt-text depends on context

How you describe the image depends on its context. Let me give you an example:

 

If this image was featured in an article about photography, the alt-text could be something along the lines of:

“Close up, greyscale photograph of man outside, face in focus, unfocused background.”

If the image is on a website about a TV-series, an appropriate alt-text could be completely different:

“Star of the show, Adam Lee, looking strained outside in the rain.”

So write an alt-text that is as meaningful as possible for the user in the context they’re in.

Keep it concise

Reading the previous section, you might be thinking to yourself: “I, as a sighted user, can see many details in the image, like who it is, how it’s photographed, type of jacket, approximate age of the guy and more. Why not write a detailed, long alt-text so a user with visual impairment gets as much information as I do?”

Glad you asked!

Well frankly, you can also get the necessary information from the image at a glance, and that’s what we’re trying to achieve for users with screen readers as well. Give the necessary information in the alt-text, but make it as short and concise as possible.

One of the few times you should write long alt-texts is when you’re describing an image containing important text. Ideally, you should not have images of text, but sometimes you need to. Like on some screenshots or photos of signs.

But the general rule of thumb is to keep it concise and avoid a verbose experience.

Don’t say it’s an image

Don’t start alt-texts with “Image of”, “Photo of” or similar. The screen reader will add that by default. So if you write “Image of” in an alt-text, a screen reader will say “Image Image of…” when the user focuses on the image. Not very pleasant.

One thing you can do is end the alt-text by stating if it’s a special type of image, like an illustration.

“Dog jumping through a hoop. Illustration.”

End with a period.

End the alt-text with a period. This will make screen readers pause a bit after the last word in the alt-text, which creates a more pleasant reading experience for the user.

Don’t use the title-attribute

Many interfaces have a field for adding a title-text to an image, close to where you add the alt-text. Skip the title-text! Nobody uses them: they don’t work on touch screens, and on desktop they require the user to hover over the image for a while, which nobody does. Also, adding a title-text makes some screen readers read both the title-text and the alt-text, which becomes redundant. So just don’t add a title-text.

When not to use an alt-text

In most cases you should use an alt-text for images, but there are some exceptions where you should leave the alt-text blank. Important note: never remove the alt-attribute entirely; that would break the html-standard. You are, however, allowed to set it to an empty string, that is: alt="". Do that in the following cases.
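The difference can be sketched in markup like this (the file name is illustrative):

```html
<!-- Empty alt: the attribute is present, so screen readers silently skip the image -->
<img src="divider.png" alt="">

<!-- Missing alt attribute: invalid html, and screen readers may fall back to reading the file name -->
<img src="divider.png">
```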

Repeated images in feeds

Pretend you’re scrolling through your Twitter feed. Every time you want to read a new tweet, you first have to listen to “Profile picture of user ”. In my opinion, that would be super annoying!

Other examples of feeds are:

  • A list of links to articles. Like the one on our page Articles.
  • Chat or messaging feeds
  • Feeds of comments

So for an ideal user experience, leave the alt-text blank for images that are used repeatedly in feeds.

Icons with text labels

You should always have text labels next to icons. Assuming you do, the icon should not have an alt-text. Let me explain why!

Let’s take a social media button as an example:

 

If you wrote an alt-text for the Facebook icon, a screen reader would say something along the lines of “Facebook Facebook.” Very redundant!

OK, this is technically not about alt-texts, but still important: make sure the icon and the link text are inside the same link element, to get a smooth experience. Like this:
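A sketch of what this could look like (the URL and file name are illustrative):

```html
<!-- Icon and text label inside the same link: announced once, as a single "Facebook" link -->
<a href="https://www.facebook.com/example">
  <img src="facebook-icon.svg" alt="">
  Facebook
</a>
```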

Another common mistake with icons is on menu buttons:

 

If the menu button has no visual text label – which, by the way, is really bad for the user experience – then it needs an alt-text (or another way of describing its function in code, like aria-label). Explain the icon’s function, like “Menu”. Don’t write “Three horizontal lines” or “Main hamburger”, which sadly are real examples I’ve stumbled on.

If the menu icon has a visible label, you should leave the alt-text blank. I often find menu buttons that are read as “Menu menu”. Once I even came across “Hamburger menu menu”. Somewhat confusing, wouldn’t you say?
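Both menu variants might be marked up like this (the file name is illustrative):

```html
<!-- Icon-only menu button: aria-label provides the accessible name -->
<button aria-label="Menu">
  <img src="hamburger.svg" alt="">
</button>

<!-- Menu button with a visible text label: empty alt avoids the dreaded "Menu menu" -->
<button>
  <img src="hamburger.svg" alt="">
  Menu
</button>
```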

Images in links

Usually an image within a link is accompanied by a link text. Like in the example below:

 

In this case, the image and the link text should be in the same link-tag in the html, and you can just leave the alt-text blank. The important thing for the user is to hear the link text; an alt-text on the image would only distract by adding information the user doesn’t need. The image is probably shown on the linked page anyway, where you can give it a proper alt-text.

If you really, really have to have an image in a link without an accompanying text, then the alt-text should describe the link destination, not the image.

Preferably, decorative images that do not convey any meaning to the user should be placed as background images in css. It probably goes without saying, but this means you don’t need alt-texts on them at all.
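For instance, a purely decorative banner could be moved into css like this (the class and file names are illustrative):

```html
<style>
  /* Decorative image lives in css: no img element, so no alt-text is needed */
  .hero {
    background-image: url("banner.jpg");
    background-size: cover;
  }
</style>
<div class="hero">Sign up today!</div>
```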

I’d classify most images that you place text on as decorative. You don’t need an alt-text on them. One example is the background image on Netflix’s startpage:

 

Special cases

Logos in the banner

Logos in the banner almost always link to the website’s start page. Opinions vary a bit on the topic of alt-texts for logos.

Some say it should include the company name, the fact that it is a logo and the destination of the link. Like such:

“Axess Lab, logotype, go to start page.”

In my opinion, this is a bit verbose. Too much noise! Since my screen reader already tells me it’s an image and a link, I only need to hear the company name. From the fact that it’s an image I assume it’s a logo, and from the fact that it’s a link I assume it follows conventions and links to the start page.
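So a banner logo might be marked up simply like this (the file name is illustrative):

```html
<!-- Just the company name: "image" and "link" are announced by the screen reader anyway -->
<a href="/">
  <img src="logo.svg" alt="Axess Lab">
</a>
```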

Svg

Scalable vector graphics (svg) is an image format that’s becoming more and more popular on the web. And I love it! Svg images keep their sharpness while zooming and take up less space, so websites load faster.

There are two main ways of adding an svg to an html-page.

  1. Inside an img-element. In that case, just add an alt-text as usual.
  2. Using an svg-tag. If you use this method, you can’t add an alt-attribute because there’s no support for that. However, you can get around this by adding two wai-aria attributes: role="img" and aria-label, with the alt-text as its value.

Actually, for the second case, you’re supposed to be able to add your alt-text as a title-element in the svg, but there is not enough support for that from browsers and assistive technologies at the moment.
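The two approaches could be sketched like this (the file name, description, and drawing are illustrative):

```html
<!-- 1. Svg referenced from an img-element: use the alt-attribute as usual -->
<img src="chart.svg" alt="Sales grew steadily from January to June.">

<!-- 2. Inline svg: there is no alt-attribute, so use role and aria-label instead -->
<svg role="img" aria-label="Sales grew steadily from January to June." viewBox="0 0 100 50">
  <polyline points="0,50 50,25 100,0" fill="none" stroke="black" />
</svg>
```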

Can’t a machine do this for me?

Although machine learning and artificial intelligence are improving quickly and can describe some images quite accurately, they are not yet good enough at understanding the relevant context. On top of that, machines are not good at deciding what is “concise”, and will often describe too much or too little of the image.

Facebook has actually built in a feature that describes images automatically. But I feel the descriptions are usually too general. One image in my feed right now is described as “Cat indoors”, while the actual photo shows a cat hunting a toy mouse.

So I’m sorry, you still have to write alt-texts yourself!

Thanks for making the web better!

I’m happy you read this far! It means you care about making the web a better place for all users. Spread the knowledge and keep being awesome!

Get notified when we write new stuff

About once a month we write an article about accessibility or usability, that’s just as awesome as this one (#HumbleBrag)!

Get notified by following us on Twitter @AxessLab or Facebook.

Or simply drop your email below!

 

 

 

NaviLens for iOS and Android: The cutting edge technology for the visually impaired

NaviLens for iOS and Android: The cutting edge technology for the visually impaired

Date Saved: 5/13/19, 10:44 AM

Source: http://www.navilens.com/

 

Maximum autonomy for the visually impaired

 

Unlike other markers, such as the well-known QR codes, NaviLens has a powerful algorithm based on Computer Vision capable of detecting multiple markers at great distances in milliseconds, even in full motion and without the need to focus. It is a cost-effective solution that requires minimal maintenance.

 

The application is based on a novel system of artificial markers, which combines high density (multitude of combinations) with long range (a 20cm wide marker is detected up to 12 meters away).

In addition, the detection algorithm can read multiple markers at the same time, at high speed and even in full motion.

Discover the interface

100% user friendly interface for the visually impaired

 

See for yourself, YouTube testimonials!

This is how NaviLens can help the visually impaired. Below, discover testimonials from the first users.

 

Underground

Ticket machine

Signs

Bus stop

Press

Awards

 

Subscribe to our Newsletter

You will receive the latest updates. We won’t spam you, we promise 🙂

NaviLens is a new comprehensive system of artificial markers based on Computer Vision. It allows the user to read a special tag, displayed in their environment, from a great distance; it also helps orient the user toward the tag and provides detailed information associated with that particular tag, in the same way that traditional signs would be read by a person with full visual capacity. To do this, the marker recognition algorithm is complemented by a novel 3D sound system that, without the need for headphones, informs the user of the position, distance, and orientation of the marker. This allows a visually impaired person to navigate unfamiliar territory with the same autonomy as a person without a visual impairment.

 

How to use NaviLens from YouTube:

Published on Dec 28, 2018

NaviLens, an app that makes it easier for visually impaired people to access information through colorful QR-like codes, has a new feature that lets users download tags for their own personal use. Until now, these tags were available only in public spaces such as train stations. With this new feature, the codes provided are blank, so users can record any information about the objects in their environment. The developers have created tags of different sizes that can be adjusted to the needs of remote reading. In addition, they are printable and easily separated.

 

Category

Science & Technology

 

Study on the use of remote, video-based assistance

The following is a message from Envision Research Institute and Wichita State University faculty member Vinod Namboodiri:

We are conducting a study on the use of remote, video-based assistance by blind and visually impaired (BVI) individuals. We want to survey BVI individuals who have used mobile apps like Facetime, Skype, BeMyEyes, AIRA to receive remote, video-based assistance from a sighted person. Results of this study will help us understand how mobile apps might best offer remote, video-based sighted assistance to BVI individuals in overcoming challenges faced in performing routine tasks. 

If you are someone with blindness or low vision and have received remote, video-based assistance in the past, we invite you to complete a short anonymous survey on your experience and preferences. Completing this survey should take no more than 15 minutes using a computer, tablet, or smartphone with a reliable Internet connection. Link to survey: https://forms.gle/33NmDtFptnYTVyy26

If you have any questions, please contact Vinod Namboodiri at vinod.namboodiri@wichita.edu

 

Re-post: Orbit Reader 20 Removed from APH Catalog

Orbit Reader 20 Removed from APH Catalog
Author: APH Blogger
Date Written: Apr 3, 2019 at 5:00 PM
Date Saved: 4/5/19, 12:44 PM
Source: http://www.fredshead.info/2019/04/orbit-reader-20-removed-from-aph-catalog.html

Photo of the Orbit Reader 20 on a white background.
After months of ongoing negotiations between the Transforming Braille Group (of which APH is a member) and Orbit Research (the manufacturer of the Orbit Reader 20), American Printing House has removed the Orbit Reader 20 from its catalog and shopping site. This comes after discussions have stalled regarding the terms of distribution to TBG partners. The global nonprofits that make up the TBG collaborate as a group to purchase Orbit Reader 20s as part of an effort to keep costs low.
“Working with the TBG, APH has negotiated in good faith for many months, balancing the needs of our customers and organization, our interest in driving a low-cost braille market, and our valuable partnerships with TBG members,” says APH President Craig Meador. “Despite our best efforts, we have not found alignment on the issues at hand. APH must now move forward, and focus our energies on our mission to support students with braille literacy and adults in their independence.”
The Orbit Reader 20 started with a question: “How do we make refreshable braille more affordable?” To that end, leaders in the field of blindness from around the world, including APH, gathered to create the Transforming Braille Group. Creating low-cost refreshable braille is a difficult task, and there were many setbacks along the way. Thankfully, the effort had an impact.
“APH was proud to be the company that stood up to be the first to bring this ground-breaking technology to market,” says Meador, “It was all worth it to be an innovator, and show that we could bring prices down. That part worked. We now have competition in the low-cost braille market that wasn’t happening five years ago. Sometimes you have to take a risk – that’s what we did.”
The drop in prices created more access by showing what can be possible. For example, the National Library Service has announced they plan to offer free refreshable braille devices to their readers in the coming years.
APH will continue its efforts to support low cost braille. “Braille cells cost a lot of money to manufacture, and the demand isn’t high enough to drive that price down – we’ll keep trying. Although it’s not an easy journey, we believe everyone who needs braille should have access to it.”
APH and the TBG are continuing to negotiate with Orbit Research in hopes that a resolution can be found. In the meantime, APH is looking at other possible low-cost refreshable braille options to include in its catalog. These will complement new premium refreshable braille devices built for students and educational use, available now and soon from APH through a partnership with HumanWare.
Orbit Research is expected to honor the warranty and continue repairs for already purchased Orbit Readers. Any requests for repairs should continue to come through APH. Supporting documentation, like the Orbit User Guide and user videos, will remain available to customers who have purchased an Orbit Reader from APH.

Repost: Google Inbox was the Gmail we desperately needed — but now it’s dead

Google Inbox was the Gmail we desperately needed — but now it’s dead
Author: Jackson Ryan
Date Written: Apr 2, 2019 at 10:10 PM
Date Saved: 4/3/19, 8:46 AM
Source: https://www.cnet.com/news/google-inbox-was-the-gmail-we-desperately-needed-but-now-its-dead/#ftag=CAD0610abe0f
Google Inbox, the much-loved, experimental email client that launched in 2014, is officially dead. And I am officially heartbroken.
I knew this was going to happen. We all did. It still hurts.
Google announced that Inbox’s time was up on Sept. 12, 2018, writing in a blog post the company was shutting it down and “planning to focus solely on Gmail.” Over the past two weeks, incessant warnings have popped up on the desktop and across my phone screen whenever I opened the app.
“This app will be going away in 5 days” it would tell me like a passive-aggressive Doomsday Clock. Each time, it would ask me to switch to Gmail and I’d wave it away with a push: “Not now.”
But it’s all over. This morning, I got this message:

Screenshot by Jackson Ryan/CNET via Google
Gmail was unleashed on the world 15 years ago on April 1 and is now used by around 1.5 billion people every day. It allowed the search engine provider to reach lofty new heights, giving it the confidence to take over the world. When it rolled into town in 2004, it slowly began swallowing up every email client in its path.
AOL Mail? More like LOL Mail. Hotmail? More like… cold mail. Yahoo? Bye.
Slowly we all became engulfed by the email version of The Blob. Email became monotonous, slinking into the shadows, filling up with spam and social media blasts. It gradually became normal. It became boring.
Then in 2014, Google announced Inbox and email was Great Again. It Marie Kondo’d my online life before I even knew who Marie Kondo was. When Sarah Mitroff reviewed Inbox in October 2014, she laid all manner of compliments on the app: “Visually appealing”, “equal parts colorful, clean and cheerful” and “fresh”. Gmail felt like a harsh, sterile hospital next to Inbox’s bright, buoyant Happy-Time-Fun-Land.
Now that Inbox is dead, Google has said it will be bringing some of the service’s most popular features over to Gmail. As I’ve finally been forced to switch over, there’s a hole in my heart. Gmail still lacks many of the features that made Inbox so powerful — and so beloved.
There’s work to do to make email Great Again, Again. What can Gmail do to ease the pain?
Bundle of joy
When you read about Inbox’s premature demise, you will no doubt read plenty about “bundles”. Inbox’s clever bundling system was the best thing to ever happen to me, a nearly 30-year-old unmarried man with zero children in a stable, loving relationship.
Inbox had that galaxy-brain energy. The real BDE. Supported by Google’s powerful algorithms, Inbox was able to sort your life out for you. It saw what was dropping in your Inbox and automatically filed it away in its own category via the voodoo magic of machine learning.
It was powerful for bundling all your receipts, purchases, holidays and business trips, placing all that information in easy-to-navigate, simple-to-find locations. I never even had to think about manually labeling or filing emails with Inbox — it just worked, from Day One. And it continued to work until it was dead.
Finding details about a trip home took seconds in Inbox, a one-click process that returned my booking, accommodation, the car I’d hired and any tours I’d booked while I was away. In Gmail, I have to sift through a torrent of banking statements, receipts, a regretful order I made for Thai food when I was sloshed three nights ago and a random PR email about their genius April Fools’ Day stunt.
There have been rumblings that Google will also be bringing bundles across to Gmail, though a timeline for that update is currently unknown so, thanks, big G — my life is now a living hell.
This is how you remind me
Besides bundles, Inbox quickly became the place where I started my day because it centralized my to-do list.
Email is, essentially, just a place where tasks get filed and Inbox’s “Reminders” feature was critical to this. In the same way you would compose an email, you could set yourself a reminder that would jump to the top of your Inbox. At the end of a busy day, I’d whip a few little reminders in for the following morning.
And sure, I can do this with Gmail’s “Tasks” integration but this opens an entirely new window on the side of my desktop. That’s a game of hide-and-seek that I don’t want to play. Because reminders were able to be pinned or snoozed, they were unobtrusive, nesting neatly within the inbox like a digital post-it note.
I don’t know why Gmail doesn’t have reminders. I can’t tell you why. They exist in other G suite services, like Calendar and Keep, but not in Gmail.
Inbox is like the Carly Rae Jepsen of email. It swept in and took the world by surprise with its spark and smarts and brightness and now, every waking moment without it is torture. Gmail, in contrast, is the Nickelback of modern email clients. It’s the homogenized radio-rock version of email.
In fact, maybe it’s worse. Maybe it’s Smash Mouth.
G’mourning
Attention spans are being obliterated by the internet and my apartment is a disorganized mess.
I mean, it’s tidy — but there’s no rhyme or reason to how I file away important tax documents, receipts or mementos. Invoking the holy name of Kondo, I tried to improve my systems a month ago. That amounted to buying more boxes and storing more things in those boxes.
I couldn’t organize myself in the real world, but with the power of machine learning and AI, Google Inbox made sure I could do it when I was inside the internet.
And I wasn’t alone.
Search for Google Inbox on Twitter and you’ll find tales of woe and misery. You’ll find users decrying the switch to Gmail. You’ll find them celebrating the life of an email service as if it were their own flesh and blood. Like the untimely deaths at Game of Thrones’ Red Wedding, we’re all watching on in horror at the injustice.
No one is celebrating. Everybody’s mourning.
New world order
But it’s all over.
Inbox was so good because it was so easy. It bundled emails together long before Gmail was doing anything of the sort. It felt like it was made for me and only me. I didn’t have to spend mornings sifting through mountains of internet text. I could get what I needed and get on with life.
It was also a calming, soft blue rather than an alarming, CHECK-YOUR-EMAIL-NOW red. That’s a fact that gets lost in this funeral. Even the logo is an open letter with a positive, life-affirming tick, rather than the closed, menacing red “M” made famous in Gmail.
I could go on and on, but I digress.
Google has slowly integrated some of Inbox’s best features into Gmail. Snoozing emails, smart replies and nudges to remind you to follow up on your to-do list were all pioneered in Inbox. On Gmail’s 15th birthday, it even brought in a host of new features, like enabling emails to be scheduled and sent at a later time and improving its Smart Compose feature, which offers suggestions to make writing email a lot faster.
I’m holding on as long as possible. The mobile version of Inbox is now six feet under, taking its place in the Google Graveyard next to Reader, Hangouts, Google Plus and Allo, but the desktop version of Inbox lives on (at least, for now). Inbox clones are popping up, aiming to make the transition period easier, but its fate is sealed.
I can do without Hangouts or Plus. Somehow, I even survived after the transition away from Reader.
But this one really stings.

Resources: Google Photos Will Now Automatically Detect Your Documents by Paul Monckton, Forbes.com

Google Photos Will Now Automatically Detect Your Documents

Author: Paul Monckton

Date Written: Mar 30, 2019 at 8:00 AM

Date Saved: 3/30/19, 11:01 PM

Source: https://www.forbes.com/sites/paulmonckton/2019/03/30/google-photos-will-now-automatically-detect-your-documents/

Smartphone cameras are useful for a lot more than selfies and landscapes; they also make very handy portable document scanners. Now Google Photos has launched a new feature designed specifically to make your documents look more presentable and legible.

 

Google’s new Crop and Adjust feature takes care of photographed documents and receipts.

Documents, unlike people or places, are designed to be read rather than admired, and this usually requires an entirely different approach when it comes to processing them and making them look their best. This often involves using functions such as rotating, cropping, sharpening and perhaps converting them to black and white for maximum readability.

The new “Crop and Adjust” feature in Google Photos will detect any photographed documents and suggest suitable edits, such as those listed above, which can then be applied automatically in a single tap.

The result is a correctly-rotated document with the background removed and any text made as clear as possible.

Google Photos users will find the Crop and Adjust rolling out soon on iOS and Android.

If you find this function useful, then it’s worth checking out the ‘Scan’ function built into the Google Drive app. The app provides a similar set of automatic enhancements to the new Google Photos function, with the added facility of saving your documents directly to your Google Drive as a PDF rather than a jpeg. Android users can also place a Google Scan widget for one-touch access to the document scanning function.

 

 

Resources: Breaking barriers: accessibility at home a costly process, by Blair Crawford, Ottawa Citizen

Breaking barriers: accessibility at home a costly process

Author: Blair Crawford

Date Written: Mar 29, 2019 at 5:00 PM

Date Saved: 3/30/19, 9:34 PM

Source: https://ottawacitizen.com/news/local-news/ottawa-firm-specializes-in-accessibilty-renovations

 

Jennifer and Eli Glanz with daughter Emilia in the master bathroom they had modified to accommodate Jennifer’s wheelchair.

It’s just a few centimetres high, but the sill of the sliding glass door that leads to the back deck of her Barrhaven home is a mountain to Jennifer Glanz.

“It’s little, but I can’t get over it,” said Glanz, who has multiple sclerosis and uses a wheelchair. Glanz and her husband, Eli, have already installed a $4,000 electric lift in their garage so that Jennifer can get out of the house, and recently completed a renovation to make their bathroom barrier free.

They moved with their daughter Emilia, 3 1/2, to a bungalow a few years ago when Jennifer’s deteriorating condition made it impossible for her to manage the stairs in their former two-storey home. The small ramp over the door sill is the next item on their reno list for summer — “if we ever get a summer,” Jennifer jokes.

“It’s the next project. And a ramp down to the grass. Emilia will be playing on the grass this summer and it would be nice to be there with her.”

Whether it’s a senior who wants to age in place in her own home, a person battling a debilitating illness, or someone injured in a sudden, catastrophic tragedy like the Westboro OC Transpo bus crash, those facing disability find that barriers abound in the home. In fact, 22 per cent of Canadians live with some sort of physical disability, according to Statistics Canada.


“The older you get, the more likely you are to have a disability,” says Patrick Curran, national executive director of Independent Living Canada, a national non-profit agency that advocates for those living with disabilities and promotes independent living.

“And if you live long enough, you will have a disability.”

Many of the modifications needed to make a home accessible are obvious: a wheelchair ramp to the front door, for example. Others aren’t so apparent.

“One item that’s really big, especially for someone with head injuries, is lighting,” said Sean MacGinnis, co-founder of BuildAble, an Ottawa company that specializes in building and renovating homes for accessibility. “You want lighting that won’t put a strain on your eyes. Or if it’s for someone who has a visual impairment, better lighting will eliminate shadows and help them see any changes in elevation in their home.”

MacGinnis founded BuildAble five years ago with partner Kyla Cullain, a registered nurse. The company works closely with their clients’ medical teams — their family doctor or occupational therapist, for example — to develop an appropriate construction plan, he said.

“We started the company out focusing on people who are aging in place, but we’ve found the majority of our clients are people who have had a medical crisis, MS or a stroke or something like that … and we do have a lot of people who’ve been in vehicle accidents too. They’re in mid-life and they want to stay in their homes or they have family that they don’t want to move.”

For Eli and Jennifer Glanz, that meant redoing their bathroom to make it accessible. BuildAble installed a barrier free bathroom that Jennifer can roll up to and swing herself into a spare wheelchair that stays in the shower. The tile floor slopes gently to a drain and a waterproof barrier under the entire bathroom floor means spills or floods cause no damage.

The old sink and vanity were replaced with a “floating sink” that lets Jennifer wheel up to it like a desk. Three heavy-duty handrails give support and stability at the toilet.

“For the longest time we had a standard tub and shower like you see in most homes. Jennifer can’t transfer herself into a standard tub, even if there’s a shower seat. It would be me physically lifting her up and into the tub. That was hard for both of us,” Eli said.

“She keeps reminding me, I only have one back.”

“It brought more independence to me,” Jennifer said. “Before, I would have to have him home and helping me have a shower. Now I don’t. He doesn’t know how many times I shower.”

It cost $15,000 to renovate the bathroom, about 80 per cent of which was paid for with grants from March of Dimes. The family had to cover the cost of the garage lift on their own.

Another clever addition is offset hinges that allow doors to swing completely out of the way, adding a crucial extra five centimetres of width to the doorway for Jennifer’s chair to pass.

The simplest and most common modification to a home is to add grab bars and handrails, MacGinnis said, including railings on both sides of a staircase. In the kitchen, countertops and cabinets can be made to lower to wheelchair level, while full-extension drawers are easier to access without awkward reaching.

One of BuildAble’s biggest jobs was to add a full elevator to the home of a man with Parkinson’s disease, he said.

The cost can vary widely. The cost of home modifications is often included in the insurance payout for accident victims or — as in the case of an Ottawa public servant who is suing the city for $6.3 million for injuries in the Westboro bus crash — part of the lawsuit claim. Others are helped with the cost through grants from the March of Dimes and other charities or through tax breaks.

“There’s a lot of low-cost things we can do that have a high impact,” MacGinnis said. A grab bar might cost $100. A second staircase railing $1,000. A wooden ramp to the door can range from $500 to $5,000, while a more aesthetically pleasing ramp of interlocking brick could cost $15,000 to $20,000.

A barrier-free bathroom costs between $12,000 and $15,000, while a full reno to make a kitchen fully accessible can run up to $30,000, he said.

In Ontario, someone who has suffered catastrophic injuries in a car crash is eligible for $1 million under the province’s Statutory Accident Benefit Schedule. But for non-catastrophic injuries, that benefit is capped at $65,000 and will only last five years, said lawyer Najma Rashid, a partner in Howard Yegendorf & Associates.

“Just because someone’s injuries aren’t catastrophic, doesn’t mean they’re not serious,” Rashid said. “Many people with serious injuries might be stuck with that $65,000 and it’s only available for five years so they have to make a judgment call as to whether they’re going to use part of the money for changes to their home or for ongoing treatment needs.”

Additional costs could become part of a lawsuit claim, she said. Lawyers would work with their clients’ medical teams or hire an occupational therapist or consultant to determine what renovations are needed and their cost.

“And if they do claim it in a lawsuit, they have to wait for that lawsuit to be over. Or self-fund it and look for reimbursement, but most people don’t have the money to pay for it themselves.”

Those looking for more information on improving accessibility can find it at Independent Living Canada’s AccessABLE Technology Expo on May 30 at the Ottawa Conference and Events Centre on Coventry Road. The one-day expo will bring together 20 exhibitors with a broad range of products for disabilities such as vision or hearing loss, cognitive impairment and mental health issues. Admission is free, Curran said.

“We’re doing this to build awareness for Independent Living Canada,” Curran said. “But we also want to give hope to people who have disabilities — to show them that there are people out there doing research and introducing new products that will be of interest to them.”

For more information, visit ilcanada.ca

Twitter.com/getBAC


Resources: Bonjour, Alexa! How Amazon’s virtual assistant learned to speak Canadian French, by Morgan Lowrie, The Star

Bonjour, Alexa! How Amazon’s virtual assistant learned to speak Canadian French

Author: Morgan Lowrie

Date Written: Mar 30, 2019 at 5:00 PM

Date Saved: 3/31/19, 9:40 PM

Source: https://www.thestar.com/news/canada/2019/03/31/bonjour-alexa-how-amazons-virtual-assistant-learned-to-speak-canadian-french.html

MONTREAL—Last September, Hans Laroche embarked on an unusual teaching assignment. He and a few thousand fellow Quebecers were enlisted to help Amazon’s virtual assistant Alexa learn the finer points of Canadian French, from the distinctive accent to so-called “joual” expressions and the linguistic mishmash known as “Franglais.”

With Amazon’s official release of its French Canadian language option for Alexa on March 21, the results are now available for all to hear.

Because Alexa’s algorithm requires a great deal of data, Laroche says he and his fellow testers were given a free Echo device and asked to interact with it on a regular basis by asking it questions, getting it to perform household tasks or using it to play music, audiobooks or news. Every week or two, they were asked to provide feedback to developers, who worked to further refine the algorithm and its language capabilities.

Laroche, who runs a Facebook page for Quebec Alexa enthusiasts from his home near Victoriaville, Que., said he was impressed with how well the device picked up on his requests.

“It was pretty surprising the things Alexa can understand, especially in Canadian French,” he said. “The French language from France has been available for a while, but it’s not the same as the language Quebecers use.”

As an example, he said Quebecers tend to use English verbs such as “check” or “cancel” rather than their French counterparts, “vérifier” or “annuler.”

“If Alexa is in (European) French and I ask it to ‘cancel le timer,’ it won’t understand,” he said. “But if I’m in Canadian French and I say it, it will understand what I’m saying.”

Laroche noted that Amazon still has some catching up to do, since competitors such as Google Assistant already have French Canadian language support.

Nicolas Maynard, the man in charge of Alexa in Canada, said teaching the virtual assistant to understand French was a difficult challenge, due to the complexity of the language and the prevalence of homonyms, contractions, and a vocabulary that differs widely by region.

Adapting it to a French-Canadian audience meant ensuring it would understand commands delivered using local colloquialisms and pronunciations, he said in a phone interview from Seattle.

Maynard said that while French speakers in France use as many, or possibly more, English words than their North American linguistic counterparts, the inflection is very different.

“The pronunciation of English words in Quebec is much closer to the English pronunciation than in France,” said Maynard.

“If you ask a French person to say the name of an American song, you’ll clearly hear the French accent. But if you ask a Canadian (francophone), you’ll get a pronunciation that is very close to English.”

But while Alexa may understand local slang, its own voice was given an accent designed to be as neutral as possible while still being that of a Quebecer.

“I think it’s more or less a Montreal accent, but you’ll tell me,” Maynard said.

He said it was also important to ensure the voice service is equipped with general knowledge from each region by being able to answer basic questions about politics and culture.

As a result, Alexa can recite the poem “Le vaisseau d’or” by celebrated Quebec writer Émile Nelligan, and has a repertoire of jokes to tell on demand.

Laroche said he has noted a lot of improvement in this department since he first began interacting with the device.

“If you ask who is Montreal’s mayor, who is the prime minister of Canada, it knows the answer, which was not the case in the beginning,” he said.

He says the voice assistant is still not perfect, however, and there are still many times when it answers a question with “Je ne sais pas” (I don’t know). But he’s still pleased to have a product that will start his coffee maker in the morning and turn on the equipment in his home gym when he announces he’s ready for a workout.

Guillaume Dufour, the founder of enthusiast group Alexa Quebec, was also an early user of the experimental “beta” version.

He was impressed with Alexa’s ability to understand mixed-language commands, such as when he asks it in French to play an English-language song. He said the virtual assistant understands his normal accent perfectly, although he sometimes has to repeat himself when he tries out the stronger accent of his native Charlevoix region.

“We can see that Amazon’s language recognition training was excellent,” said Dufour, an IT expert and programmer who also creates “skills” for the devices.

And he would know, having amassed an impressive collection of voice-activated assistants including four Echo devices, a Google Home, Apple HomePod and a Harman Kardon Invoke.

Dufour said he has noticed only one true “glitch” — the device sometimes delivers the weather report in a jumble of English and French — but he has found that some of Alexa’s jokes are told “in a slightly jerky intonation that does not quite follow the rhythm of the French language.”

As for Maynard, he said Alexa’s education is far from complete.

He won’t say how many Quebecers are currently using Echo or other Alexa devices, but he says the virtual assistant’s artificial intelligence-driven algorithm will continue to absorb new data and refine its capabilities the more it is used.

“I see the launch as just the beginning of my job,” he said.