TextSL

TextSL is a command-based web interface that allows visually impaired users to access the popular virtual world Second Life using a screen reader or TextSL's built-in self voicing. Its interaction mechanism was inspired by text-based adventure games: users explore Second Life and interact with other avatars and virtual world objects through a natural language command interface.
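
To give a flavor of this interaction style, below is a minimal sketch of a text-adventure-style command loop (written in Python purely for illustration; the commands shown are hypothetical examples, and this is not TextSL's actual implementation):

    # Illustrative sketch of the text-adventure-style command loop that
    # TextSL's interaction model is inspired by. Not TextSL code.

    def cmd_help(args):
        """List the available commands."""
        print("Available commands: " + ", ".join(sorted(COMMANDS)))

    def cmd_say(args):
        """Echo a chat message, as a stand-in for talking to nearby avatars."""
        print('You say: "' + " ".join(args) + '"')

    COMMANDS = {"help": cmd_help, "say": cmd_say}

    def repl():
        """Read a line, treat the first word as the verb, dispatch the rest."""
        while True:
            line = input("> ").strip()
            if not line:
                continue
            verb, *args = line.split()
            handler = COMMANDS.get(verb.lower())
            if handler:
                handler(args)
            else:
                print('Unknown command "' + verb + '". Type "help" for a list.')

    if __name__ == "__main__":
        repl()

The point of this design is that every action is a short typed command and every response is plain text, which is what makes the interface accessible through a screen reader or speech synthesizer.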

How to Use?

  1. First, sign up for a Second Life account. Choose a unique first name for your avatar and a last name from Second Life's list of available last names. You must also provide a password, a date of birth, and an email address. Make sure to remember the avatar name and password, as TextSL needs them to log in your avatar. To register you must pass an audio CAPTCHA test. You can sign up for a Second Life account on the Second Life website.
  2. Access TextSL in your browser at the following URL: http://ear.textsl.org. Turn off any screen readers, as TextSL is self voicing. The first time you access TextSL, it asks for your avatar's first name, last name, and password. Once these are provided, TextSL stores them on your computer, and you can log your avatar into Second Life by typing "login".
  3. Explore Second Life! Your start location will be Virtual Ability Island, a virtual community in Second Life for users with disabilities.
  4. Type "tutorial" for a guided introduction or "help" for a list of available commands; a sample first session is sketched after this list.
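
For example, a first session might go roughly as follows (the responses are paraphrased descriptions, not TextSL's actual wording):

    > login
    (TextSL logs your avatar into Second Life and describes your surroundings)
    > help
    (TextSL lists the available commands)
    > tutorial
    (TextSL starts a guided introduction to the commands)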

Feedback

TextSL is still in beta, and we would be happy to receive feedback such as bug reports or suggestions for improvement. Contact us at feedback[at]textsl[dot]org or post your feedback in TextSL's Google Groups discussion forum.

Contribute

The TextSL source code is distributed under the GNU General Public License. If you are interested in contributing to the project, please visit TextSL's developer pages.

Developers

This application has been developed by Dave Carr, Manjari Sapre, Bei Yuan, Bugra Oktay and Eelke Folmer of the University of Nevada, Reno.

Publications

Acknowledgements

This research is supported by NSF Grants IIS-0917362 and IIS-0738921.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
TextSL is copyright 2008-2012 Eelke Folmer.