
Drumheller shark rider and tamer



Drumheller
06-14-15, 10:52 AM
If anyone has any questions about the writer behind Bard/Drumheller, or about blindness in general, feel free to ask them here. Have no fear; I will not be offended by your curiosity. The following three posts cover: the nature and character of screen-readers and JAWS; a discussion of the disorder retinitis pigmentosa; and lastly, my personal visual history. If, after reviewing this information, you have questions, feel free to ask below.

As this has come up in Chat, I thought it fruitful to give a bit of an explanation about me and the program I use to write here. What follows is from the training manual I am writing for the program, and in keeping with the JAWS theme, it is called SHARK (Supplemental Handicap Response & Accessibility Kit).

2.2 What is a Screen Reader?

A screen-reader is a software application that enables individuals who are blind, visually impaired, or dyslexic to use a computer. In simple terms, a screen-reader consists of two parts: the screen-reader software itself, which sends the appropriate information from the visual PC screen to a synthesizer, and the speech synthesizer, which is the component that produces sound from that text in the form of a synthetic voice.

Synthesizers come in two general types: an external hardware synthesizer, which typically plugs into the back of the computer case, or tower; and a software synthesizer, which operates through the computer’s soundcard.

Screen-readers work closely with the computer’s Operating System (OS) to provide information about icons, menus, dialogue boxes, files and folders. A screen-reader provides access to the entire OS that it works with, including many common applications. Screen-readers can provide feedback to the user in two different formats, Braille and/or speech.

A screen reader uses a Text-To-Speech (TTS) engine to translate on-screen information into speech, which can be heard through earphones or speakers. The TTS engine may be a software application that comes bundled with the screen-reader, or it may be a hardware device that plugs into the computer.
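
As a concrete illustration of the software variety, here is a minimal sketch using the third-party pyttsx3 Python library, which drives whatever TTS engine the platform provides through the soundcard. The library choice and the spoken phrase are my own inventions for illustration; this is not how JAWS itself is implemented.

```python
# Minimal software-TTS sketch (pip install pyttsx3). pyttsx3 wraps the
# platform's default speech engine and plays through the soundcard.
import pyttsx3

engine = pyttsx3.init()            # load the platform's default TTS driver
engine.setProperty("rate", 180)    # speaking rate in words per minute
engine.say("My Computer icon")     # queue a phrase, as a screen reader might
engine.runAndWait()                # synthesize and play the queued speech
```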

Originally, before computers had soundcards, screen readers always used hardware TTS devices, but now that soundcards come as standard on all computers, many find that a software TTS is preferable (Jones and Habersham, 2003; Wallcaut and Derrner, 2002). In addition to speech feedback, screen readers are also capable of providing information in Braille. An external hardware device, known as a refreshable Braille display, is needed for this. A refreshable Braille display, or Braille terminal, is an electro-mechanical device for displaying Braille characters, usually by means of raising dots through holes in a flat surface. The mechanism which raises the dots uses the piezo effect of certain crystals, which expand when a voltage is applied to them (for more on the piezo effect see Appendix A). Such a crystal is connected to a lever, which in turn raises the dot. There has to be a crystal for each dot of the display (i.e. eight per character, or cell). A refreshable Braille display contains one or more rows of cells. Each cell can be formed into the shape of a Braille character, a series of dots similar to domino dots in their layout. As the information on the computer screen changes, so do the Braille characters on the display, providing refreshable information directly from the computer.
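
To make the dots-and-cells idea concrete, below is a minimal sketch in Python. It relies on one fact worth knowing: the Unicode “Braille Patterns” block begins at U+2800, and each of a cell’s eight dots sets one bit of the character’s code point (dot 1 = 0x01 up through dot 8 = 0x80). The five-letter table is just the start of the standard Braille alphabet; a real translator handles the full alphabet plus Grade 2 contractions.

```python
# Sketch: map a few letters to Unicode Braille cells. Each raised dot
# contributes one bit; the cell is the base code point U+2800 plus those bits.
DOTS = {  # letter -> raised dot numbers (start of the standard alphabet)
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def to_cell(letter: str) -> str:
    bits = sum(1 << (d - 1) for d in DOTS[letter])  # dot n sets bit n-1
    return chr(0x2800 + bits)

print("".join(to_cell(c) for c in "cab"))  # -> the three cells for c, a, b
```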

While it is possible to use either form independently, Braille output is commonly used in conjunction with speech output (Scherer, 2001).

In its “Computer Accessibility Technology Packet,” the U.S. Department of Education (1997) makes an important distinction between “talking software” applications and “screen readers.” An example of the first is an instructional software package that provides verbal directions for an on-screen activity or uses verbal reinforcement for correct responses (“good job,” “you are correct,” etc.). In contrast, a screen reader can read all system icons, menu bars, system information, and text generated through applications. While the first is important for instructional purposes, a screen reader is essential to full computer access (p. 39).

Since the majority of screen reader users don’t use a mouse, all screen readers use a wide variety of keyboard commands to carry out different tasks. Tasks include reading part or all of a document, navigating web pages, opening and closing files, and editing and listening to music. A blind or visually impaired computer user will utilize a combination of screen reader commands and operating system commands to accomplish the many tasks a computer is capable of performing. All current operating systems have their own keyboard shortcuts, which are available to everyone, not just screen reader users. An example of a Microsoft Windows keyboard shortcut is the Alt+A key combination, which opens the Favorites menu in Internet Explorer. Each screen-reader uses a different series of commands, so most individuals will choose one screen-reader and use it consistently, as the task of learning a large number of new keyboard commands is somewhat daunting. Despite this variation in commands across different screen-readers, all screen-readers handle language and images in similar ways.

Screen readers are programmed to identify common graphics in the operating systems and common applications they work with. When a screen reader encounters a graphic it recognizes, it relays a pre-programmed piece of text back to the user, either as speech output or as Braille. For example, when a Windows-based screen reader, such as JAWS, encounters a graphic it identifies, such as the My Computer icon on the desktop, it will supply the text “My Computer icon” to the user in their chosen format: speech or Braille. The difficulty arises when the screen-reader encounters an image that it cannot identify. With certain screen-readers it is possible for the user to append a label to the image themselves, although this assumes that a description of the image can be found elsewhere to begin with.

In the case of web pages, the text description appended to an image is supplied to the screen-reader user in their chosen format, assuming that such a description has been provided by the web site developer. Provided that web pages are built using well-structured code, screen-readers are able to interact with them very easily. Well-structured web pages should include headings, lists, paragraphs and quotations where appropriate, as well as tables that include relevant information about their content, images that carry an alternative text description, and links that have clear link text. All of these things should be done in the markup language that the web page is written in, because a screen reader reads the code of the page and makes certain key commands available based on it.

For example, when a screen reader identifies a table on a web page, it will look for column and row headings. If they are present, this information is relayed to the user. In addition, a series of key commands is made available, that allow the table to be navigated vertically (up and down columns) or horizontally (left and right across rows).
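
As a sketch of what “reading the code of the page” means in practice, the toy program below uses Python’s standard html.parser to pull the column headings out of a table, the same information a screen reader relays before letting the user navigate cell by cell. The page snippet and class name are invented for illustration; real screen-readers do considerably more than this.

```python
# Sketch: find a table's column headers by reading the page's code rather
# than its visual layout, using Python's standard-library HTML parser.
from html.parser import HTMLParser

class HeaderFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_th = False
        self.headers = []

    def handle_starttag(self, tag, attrs):
        if tag == "th":           # a column-header cell is opening
            self.in_th = True

    def handle_endtag(self, tag):
        if tag == "th":
            self.in_th = False

    def handle_data(self, data):
        if self.in_th:            # keep only text that sits inside <th>
            self.headers.append(data.strip())

page = ("<table><tr><th>Name</th><th>Phone</th></tr>"
        "<tr><td>Ana</td><td>555-0100</td></tr></table>")
finder = HeaderFinder()
finder.feed(page)
print(finder.headers)  # ['Name', 'Phone'] -> announced while moving across columns
```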

With regard to different languages, each screen-reader has a primary language, which matches the language of the operating system. In addition, screen-readers are capable of dealing with different languages within documents. For example, if a passage of text in a web page is marked up in the code as being in French, a screen reader will alter the accent, pitch and speaking rate of the synthesized speech output to mimic the style of spoken French. Most screen readers support common languages including English, American English, French, Spanish, Italian and German. Other languages, including French Canadian, Mexican Spanish, Finnish and Basic Chinese, are offered by certain screen-readers, such as JAWS, but are not standard for all screen-readers.

Screen-readers are very complex, capable applications. They offer far more than mere assistance with browsing or email retrieval. A screen-reader is simply another interface, a monitor replacement, offering verbal and tactile feedback rather than visual. There are of course difficulties in using an operating system designed for visual feedback with an application that uses speech or Braille, but in the hands of a competent user a screen-reader is a powerful piece of software that can be used to carry out most, if not all, computer based tasks.

While this discussion has generally reviewed the features that characterize screen-reader programs, it might be beneficial to briefly review the history of screen-reader technology and how it has evolved over time, so that one can understand the interaction between the screen-reader program and the computer.

2.3 The history of screen-readers.

The history of screen-reader technology can be viewed in terms of four distinct periods: the era before the Personal Computer (PC), the period of the DOS-based PC, the period of the Graphical User Interface (GUI), and the era in which Web technology became prevalent.

It should be noted that there was a relatively continuous evolution in screen-reader technology throughout the last three of these phases, in order to continue providing blind and visually impaired users with access to content on the screen. Furthermore, while the initial goals of research and development in speech synthesis were not necessarily to provide what we now call screen-reader technology, it is important to look at the origins of speech synthesis and TTS, since speech is one of the two forms that screen-readers use to present information.

At the 1939 New York World’s Fair, the first device to be considered a speech synthesizer, the VODER (Voice Operating Demonstrator), was introduced (Richardson, 2002). The VODER was a manually controlled electronic speaking instrument invented by Homer Dudley while at Bell Laboratories. It was based on its counterpart, the VOCODER, also invented by Dudley, which was built as an attempt to save early telephone circuit bandwidth by using speech compression.

The VODER was operated by using touch-sensitive keys and a foot-pedal which controlled the electronic generating components. While speech quality and intelligibility were poor and the device was difficult to operate, the VODER proved that speech could be artificially produced.

Research and development in speech synthesis technology continued over the next few decades in both academia and industry, with the invention of the first formant synthesizer, PAT (Parametric Artificial Talker), by Walter Lawrence in 1953, and the first articulatory synthesizer in 1958 by George Rosen of M.I.T., called the DAVO, which stood for Dynamic Analog of the Vocal Tract (Klatt, 1987).

The first full English text-to-speech system was developed by Noriko Umeda at the Electrotechnical Laboratory in 1968. In 1979, the MITalk text-to-speech system was developed at M.I.T., followed by the Klattalk system, which was designed by Dennis Klatt. The technology in the MITalk and Klattalk systems formed the basis for many of the Text-to-Speech systems used today (Klatt, 1987; Richardson, 2002).

While much of the research in academia was aimed at understanding speech synthesis further and was not necessarily focused on potential blind users, certain efforts were aimed specifically toward the blind. These included the Talking Typewriter developed by IBM Research in the 1960s, the Braille Printer in 1975, and the Kurzweil Reading Machine for the blind in 1976. It should be noted that the Kurzweil Reading Machine was considerably different from the Kurzweil 1000 and 3000 software packages mentioned earlier; the device resembled a rather large and bulky scanner. Ray Kurzweil, an inventor whose interest in pattern recognition also led him to invent the first omni-font optical character recognition (OCR) system, developed this reading machine for the blind.

As the story goes, Kurzweil was “looking for a problem” for the OCR solution he’d found. He was sitting next to a blind man on an airplane who explained his need to gain access to various types of printed materials. Kurzweil found that the overall need involved two problems: a flatbed scanner would be needed to scan in the books and printed materials, and a full text-to-speech program would need to be developed to read the text after it was translated with OCR. He developed both of these products. The Kurzweil reading system was far too expensive for the individual consumer, with a price between $30,000 and $50,000, but these machines were used in libraries, schools, and service centers for the visually impaired (Leventhal, 2004; Richardson, 2002).

While all of these different forms of assistive technology incorporated text-to-speech, the real development in screen-readers began in 1982 with the release of IBM personal computers; these provided a standard interface for connecting external devices, combined with a standard operating system and the possibility of installing much more memory with which to run more powerful software, including screen-readers. Interestingly, video screens, or monitors, were first used with computers in the early 1960s, but did not become common until the 1970s. Earlier computers used a printer rather than a monitor for output; text was a one-dimensional stream of scrolling information, which could be fed into a speech synthesizer connected between the computer and the printer. The only problem was that the user had no way to re-read text that had already been spoken (Richardson, 2002; Walker, 2003). As a result of the development of IBM’s Personal Computer, by the late 1980s several screen-readers were available on the market, including Enable Reader, Soft Vert, the Enhanced PC Talking Program, and Job Access With Speech. The importance of the introduction of the MS-DOS operating system cannot be overstated, since it provided a reliable, fully functioning platform around which programs could be designed. It has also been argued that the development of the MS-DOS operating system was what made the Personal Computer (PC) possible (Schroeder, 2000; Walker, 2003).

The screen-reader for the PC opened up doors for employment for the blind.
According to the National Federation of the Blind of North America, “Blind people could be found operating DOS based systems to do such jobs as order taking, word processing, customer service, accounting and more; for the first time in history, individuals that were blind could conceive of a world of equal access to the written word” (Gabias, 2005, p. 47).

In short, with the MS-DOS-based screen-readers, many blind users could take advantage of computer technology.
With DOS, speech output systems had relied on character-based computer displays. In the late 1980s and early 1990s, the widespread adoption of the Graphical User Interface (GUI), as in the Macintosh and Microsoft Windows, presented new issues for blind users.
While an amazing advancement for sighted users, the GUI, which relied on icons and graphical representations, presented blind users with new barriers to access. Previously, the 256 ASCII characters were fairly easily translated into speech or Braille by a DOS screen reader; now, pixel-based systems presented an entirely new paradigm, and these systems were at first totally inaccessible to the blind. Access to Windows for blind computer users was slow in development, but like access for DOS, it eventually emerged.

The first screen-reader for a Windows operating system was released in 1992. It is at this juncture that one begins to observe the impact of activism on the part of the disabilities and blind community. Advocacy organizations like the National Council on Disability and state agencies that serve the blind, such as the Commissions for the Blind in Massachusetts and in Missouri, urged Microsoft to commit to accessibility efforts for Windows 95 or potentially face refusals by those states to purchase the operating system when released (Schroeder, 2000). While technology access for blind users was not mandated by law at this point, the potential threat of losing state contracts or sales to these two state governments might have had some impact.
Microsoft responded to the growing concern in the disabilities community and created a working group on accessibility. In 1993, the new director of Microsoft’s accessibility efforts, Greg Lowney, addressed the National Federation of the Blind in Computer Science, stating that "Windows has probably done more than anything else to earn Microsoft the enmity of the blind community. Microsoft has been both hated and feared by many people because we were promoting a graphical operating system without making sure that it could be used by people who are blind, and the results have been disastrous for many people" (Schroeder, 2000, p. 104).

Microsoft’s efforts to make future releases of Windows more accessible led to the release of Microsoft Active Accessibility (MSAA) in 1997, which exposes the user interface control objects so that assistive technologies can access them. With MSAA, screen-reader manufacturers can develop on a common platform, making the task of writing a screen-reader easier (Luishent, 2002).

As one can easily observe, this discussion of the history of screen-readers has focused predominantly on Microsoft and its involvement. This is not to say that other companies have not published screen-reader technologies or been involved in the issue of disability accessibility. For instance, even though Apple is no longer producing screen-reader technologies, the company has played an important role in the history of the development of the modern screen-reader.
Before Microsoft created an internal group focused on accessibility, Apple had started its Worldwide Disability Solutions Group (WDSG) in 1985. The group, founded by Alan Brightman, was seen as the industry’s most innovative team in the field of accessibility (Alexander, 2001; Luishent, 2002).

Despite the group's small size of only five members, the WDSG collaborated with adaptive technology developers to try to make Apple's existing hardware and software accessible. For a time, Apple was considered amongst the assistive technology community to be the computer of choice. Unfortunately, however, as market forces began to impact Apple, Steve Jobs dismantled the group in 1998, which reportedly saved Apple one million dollars annually.

Perhaps the best summary of Apple's past and current involvement in the accessibility market is provided by Gregg Vanderheiden, director of the Trace Research & Development Center at the University of Wisconsin, in his statement: “Apple was doing things to ensure disabled access when it wasn’t even on anyone else’s radar, but now that they don’t have people dedicated to working on the topic, they won’t be at the top anymore” (Tedeschi, 1999, p. 32). Given that Apple was facing economic difficulties, it is likely that Jobs believed the financial costs outweighed the benefits to Apple at that time. Perhaps his assumption was that accessibility was not a profitable market venture and hence did not justify a dedicated team during a time of overall financial hardship within the company. While Apple did continue to provide some accessibility features in its operating systems even without this centralized team, including zoom features for the visually impaired and limited text-to-speech applications built into the system, one large concern amongst screen-reader developers was the lack of Operating System hooks within the first release of Apple’s OS X. These hooks, or background functions that programs such as screen-readers can access directly, are important for easing the development of third-party screen-reader technology.

Microsoft, on the other hand, as mentioned previously, had exposed the user interface control objects with MSAA and provided these “hooks” so that assistive technologies can access them in Windows. MSAA allows applications to expose to screen-readers the type, name, location, and current state of all objects, and notifies screen-readers of any Windows event that leads to a user interface change. While MSAA is not the only way for an application to communicate with assistive technology, it allows assistive technology and screen-reader developers to support a broader variety of applications without custom programming for each one (Luishent, 2002). Microsoft also offers a set of tools, the MSAA Software Development Kit, that allows program designers to see the information MSAA is exposing, making it easier for screen-reader developers to work with Windows. It is difficult to know whether Microsoft assumed a large market potential by incorporating this feature, but it put them in a position of control in the accessible operating system marketplace (Alexander, 2001; Luishent, 2002).

Currently, there are about a dozen different screen-reader programs available on the open market, and the two that have emerged as leaders work exclusively on the Windows operating system. This is not surprising given the ease of assistive technology development with MSAA versus the difficulties caused by the lack of hooks in Apple’s OS X.
These two screen-readers are Window-Eyes by GW Micro, Inc. and JAWS (Job Access With Speech) by Freedom Scientific. The cost of each product is comparable, which likely contributes to their steady competition, with JAWS running about $895 and Window-Eyes at $795; “the two have been running neck-and-neck for years, trying to keep up with the other’s improvements” (Mathews, 2006, p. 58). Another important factor in the popularity of these two programs is their relative flexibility in handling graphical interfaces, especially with respect to the internet.

The internet, on the whole, is the “final frontier” of accessibility. The current difficulty stems from the fact that the screen-reader has to interact not only with a graphical operating system, but with the code of individual websites as well. More specifically, screen-readers must convert what is a two-dimensional page into a one-dimensional text string, a process referred to as “linearizing” the page. In other words, the screen-reader processes only the text between HTML tags, along with some tag attributes, such as the text in “alt” and “title” attributes. Problems arise when the code is sloppy: current browsers for the sighted are very forgiving of non-standard HTML, but screen-readers can have a difficult time reading such a page. It should be noted that while HTML (HyperText Markup Language) is not the only language that screen-readers can read, it is one of the most accessible (Mathews and Scherer, 2006). If correctly followed, the standards set in the W3C Web Content Accessibility Guidelines ensure that one’s web site is compatible with screen-readers. However, as the old saying goes, “you can lead a horse to water, but you can’t make him drink.” While many designers know that these guidelines exist, they often aren’t mindful of the needs of blind users or of how their site reads through a screen-reader (Alexander and Richardson, 2005).
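
Below is a toy sketch of that linearizing step, again with Python's standard html.parser: text between tags is kept in document order, and each image contributes its “alt” text, or a placeholder when the author supplied none. The page snippet is invented for illustration and this is not any particular screen-reader's algorithm.

```python
# Sketch of "linearizing": flatten a two-dimensional page into the
# one-dimensional string a speech synthesizer can read.
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":  # images are spoken via their alt text, if any
            alt = dict(attrs).get("alt")
            self.parts.append(f"{alt} graphic" if alt else "unlabeled graphic")

    def handle_data(self, data):
        if data.strip():  # keep visible text in document order
            self.parts.append(data.strip())

page = '<h1>Weather</h1><img src="sun.png" alt="sunny"><p>High of 75.</p>'
lin = Linearizer()
lin.feed(page)
print(" | ".join(lin.parts))  # Weather | sunny graphic | High of 75.
```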

It is difficult to follow the adoption curve of screen-readers the way one can for mainstream consumer products. Cost, which is generally a crucial factor in the adoption of mainstream products, remains consistently high in the realm of assistive technologies. A smaller market and limited resources force assistive technology users to carry more of the cost burden (Richardson, 2003).

Federal regulation has had some effect on encouraging the larger software companies to make their products accessible to assistive technologies, owing to the magnitude of federal purchasing power. Some claim that as the population ages, the market for screen-readers and standardized products will grow. If this is the case, it will be interesting to see if and how it impacts the standing of Window-Eyes and JAWS. It is also possible, although unlikely, that another company might produce a more flexible and adaptable program in the near future that will steal the market. At present, however, the popularity of JAWS among public institutions, such as universities and public libraries, increases the likelihood that it will continue to be purchased by those establishments for some time (Scherer and Mathew, 2006).

Furthermore, while the brief history discussed thus far has presented the general progression of screen-reader technology, it has not described the development of JAWS itself, a topic that will now be covered in some detail.

2.4 The history of JAWS.

As mentioned previously, JAWS (an acronym for Job Access With Speech) is a screen-reader produced by the Blind and Low Vision Group at Freedom Scientific of St. Petersburg, Florida, USA. The program's purpose is to make personal computers running Microsoft Windows accessible to blind and visually impaired users. This is accomplished by providing the user with screen-based information either by means of a Braille display or through a Text-To-Speech (TTS) program, and by allowing comprehensive keyboard interaction with the computer.

The JAWS program was originally released in 1989 by Ted Henter, a former motorcycle racer who lost his sight in a 1978 automobile accident. In 1985, Henter, along with an investment of one hundred and eighty thousand dollars from Bill Joyce, founded the Henter-Joyce Corporation in St. Petersburg, Florida, where it remains to this day.

Joyce is no longer a member of the company; he sold his interest (i.e., the part of the company he owned as a result of his investment) back to Ted Henter sometime in 1990.

In April 2000, Henter-Joyce, Blazie Engineering, and Arkenstone, Inc. merged to form Freedom Scientific, which is the name most closely associated with the JAWS product (Roberts, 2008).

Given that the original version of JAWS was released in 1989, before the emergence of the Graphical User Interface (GUI), it is not surprising that it was designed to work with the MS-DOS operating system. Even though JAWS was one of several screen-readers designed to give blind users access to text-mode MS-DOS applications, JAWS's use of cascading menus, in the style of the popular Lotus 1-2-3 application, was unique at that time.

A simple example of a cascading menu can be observed in the Windows Start menu. Another feature unique to the original version of JAWS was its use of macros that allowed users to customize the user interface and work better with various applications.

One can understand a macro as a single command that in turn activates a series of commands, without the user having to type each of them; in other words, a kind of computer shortcut. Ted Henter and Rex Skipper wrote the original JAWS code in the mid-1980s, releasing version 2.0 in mid-1990. Skipper left the company after the release of version 2.0, and following his departure, Charles Oppermann was hired to maintain and improve the product.
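
For readers unfamiliar with the idea, here is a toy sketch of macro expansion. The command names are hypothetical inventions for illustration; JAWS's actual scripting facility is far richer.

```python
# Toy illustration of the macro idea: one name expands into a sequence of
# commands so the user does not have to issue each one individually.
MACROS = {
    "read_address_block": ["go_to_field", "say_line", "next_line", "say_line"],
}

def run(command, execute):
    # Expand a macro into its steps; pass ordinary commands through unchanged.
    for step in MACROS.get(command, [command]):
        execute(step)

run("read_address_block", execute=print)
```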

In 1992, as Microsoft Windows became more popular, Oppermann began work on a new version of JAWS. A principal design goal was not to interfere with the natural user interface of Windows and to continue to provide a strong macro facility. Test and beta versions of JAWS for Windows (JFW) were shown at conferences throughout 1993 and 1994.

During this time, developer Glen Gordon started working on the code, ultimately taking over its development when Oppermann was hired by Microsoft in November 1994. Shortly afterwards, in January 1995, JAWS for Windows 1.0 was released. Currently a new revision of JAWS for Windows is released about once a year, with minor updates in between.

Drumheller
06-15-15, 03:04 PM
About my Blindness

Retinitis pigmentosa (RP) refers to a group of inherited diseases causing retinal degeneration. The cell-rich retina lines the back inside wall of the eye. It is responsible for capturing images from the visual field. People with RP experience a gradual decline in their vision because photoreceptor cells (rods and cones) die. Forms of RP and related diseases include Usher syndrome, Leber’s congenital amaurosis, rod-cone disease, Bardet-Biedl syndrome, and Refsum disease, among others.

I specifically have rod-cone disease, or dystrophy.

What we see is in fact made in the brain. The brain makes sight from signals given to it by the eyes.
What is the normal structure of the eye?
The eye is made of three parts.

• A light focussing bit at the front (cornea and lens).
• A light sensitive film at the back of the eye (retina).
• A large collection of communication wires to the brain (optic nerve).

A curved window called the cornea first focuses the light. The light then passes through a hole called the pupil. A circle of muscle called the iris surrounds the pupil. The iris is the coloured part of the eye. The light is then focused onto the back of the eye by a lens. Tiny light sensitive patches (photoreceptors) cover the back of the eye. These photoreceptors collect information about the visual world.

There are two types of photoreceptors named by their shape when looked at in fine detail. They are called 'rods' and 'cones'.

Rod and cone photoreceptors are good at 'seeing' different things:
Rods are good at 'seeing':
• things that move
• in the dark
• but only in black and white
• and in less detail.
Cones are good at 'seeing':
• things that are still
• in daylight
• in colour
• and in fine detail.

The covering of rod and cone photoreceptors at the back of the eye makes a thin film called the retina. The central bit of the retina is made up of cones. They help us see the central bit of vision that we use for reading, looking at photographs and recognising faces. The area of the retina around the central bit is made up of rods. The rods see the surrounding bits of vision and help us to walk around and not bump into things especially in the dark or twilight. Each photoreceptor sends its signals down very fine wires to the brain. The wires joining each eye to the brain are called the optic nerves. The information then travels to many different special 'vision' parts of the brain. All parts of the brain and eye need to be present and working for us to see normally.

What is Rod-Cone Dystrophy?
Rod-Cone Dystrophy is the name given to a wide range of eye conditions. These eye conditions are all linked by a problem with the rod and cone photoreceptors. The photoreceptors either do not work from the day a child is born or else slowly stop working over a period of time. Dystrophy is a word for a condition which a child is born with. Some of these conditions do not only affect the eye but may also affect the rest of a child's body.

How is RP inherited?
An estimated 100,000 people in the U.S. have RP, mainly caused by gene mutations (variations) inherited from one or both parents. Mutated genes give the wrong instructions to photoreceptor cells, telling them to make an incorrect protein, or too little or too much protein. (Cells need the proper amount of particular proteins in order to function properly.) Many different gene mutations exist in RP. In Usher syndrome, for example, at least 14 disease-causing genes have been identified.

Genetic mutations can be passed from parent to offspring through one of three genetic inheritance patterns — autosomal recessive, autosomal dominant, or X-linked.

In autosomal recessive RP, both parents carry one copy of the mutated gene but have no symptoms themselves. Children have a 25 percent chance of being affected by inheriting a mutated copy from each parent.

In autosomal dominant RP, usually one parent is affected and is the only parent with a mutated gene. A child has a 50 percent chance of being affected by inheriting the mutated gene from that parent.

In families with X-linked RP, the mother carries the mutated gene, and her sons have a 50 percent chance of being affected. Daughters are carriers and aren’t usually affected. However, some daughters are affected, but with milder symptoms.

If a family member is diagnosed with RP, it is strongly advised that other members of the family also have an eye exam by a physician who is specially trained to detect and treat retinal degenerative disorders. Discussing inheritance patterns and family planning with a genetic counselor can also be useful.

If anyone has any further questions, don’t hesitate to ask, either here, by PM, or in Chat. I do not mind answering most types of questions, and will not become angry.

Bard
09-09-15, 10:12 AM
Let me now give you a little personal history. I was born with this disorder, although it has worsened over time. I was not born completely blind, but with extremely poor vision: I was light sensitive; I couldn’t see colors; shapes were not well defined, even under the best of circumstances; and I had a visual field of only about fifteen degrees, far smaller than what sighted individuals have. What is more, at night, I could at best see only large shapes at twice an arm's length away. The brighter the light, the less I could see. In concrete terms, my visual acuity was 20/600. Then, when I turned 13, more of my photoreceptors died, a common occurrence as the eye becomes overstrained trying to overcompensate. On top of this I developed cataracts, for reasons that are still not entirely known. So I went from bad vision to no vision at all.

Philomel
09-09-15, 10:33 AM
Thank you, Bard / Drum, for sharing this. It gives an insight into your background and, in a way, into you.