Hi everyone! I have a new student for my Robotics class, and he is visually impaired. We are going to start learning RobotC soon, but I'm a little worried about the interface and its compatibility with screen readers. Has anyone gone through this before? I would greatly appreciate your help!
This is a fascinating and important topic. It's a difficult problem to solve, and it shouldn't be as hard as it currently is. We're a smart, capable group of people, and this is a problem worth solving.
First, I have not tried RobotC with any accessibility tools, but I will look into doing that.
Second, even if RobotC doesn't play nice with a screen reader, there are still ways to do what is needed. I usually edit code outside the RobotC environment while keeping the same file open in RobotC. After editing, I force RobotC to refresh, then compile and download. A similar setup might work for an unsighted programmer.
Taking on the task of making this work would be a fantastic Community Outreach activity for a VEX Team aspiring to Excellence. And it would truly help someone.
What none of us knows yet is how the new development environment, to be released next year, will handle important accessibility issues.
Does the VI student have any computer experience already? Are there particular reader tools he/she currently uses?
Finally, here is some really interesting writing by and about a blind programmer. Not much of it is directly applicable, but it at least shows that programming without sight is possible.
This student is very technologically inclined. He loves everything to do with technology and wants to be a computer scientist. He currently has access to NVDA and JAWS, but I know from trial and error that JAWS, even though it's expensive, is not very good. For example, it does not read basic PDFs very well, if at all.
Editing code outside the RobotC environment is something I will try with him; however, if there are errors, he will not be able to tell what needs to be fixed. He is very independent and wants to be able to code on his own. Definitely an engineer in the making.
I am also interested in hearing whether anyone has experience with screen readers in the other programming environments, for example PROS. Is that a viable option?
This is something I've been wondering about with VRC itself. I've worked with students with visual impairments and physical impairments, and it seems like VRC isn't very adaptable to accommodating them. For example, if a student only has one highly functional arm/hand, the standard VEX joystick isn't very useful, but we have to use it. I expect most aids for visual impairment would be fine for observing the robot on the competition field.
He can certainly be a computer scientist. Deep thinking, math ability, logic, and problem solving are more important than pretty screens in that profession.
I'm currently taking a deep dive into accessibility issues for some work projects, so it's good to hear about the experience with NVDA and JAWS.
I'm suggesting an integration that would also scrape the errors out of the RobotC error dialog, so they too could be read.
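To make the idea a bit more concrete, here's a minimal sketch of the "read the errors aloud" half, assuming the error text can be copied or exported from the RobotC error window into a plain text file (the file name, format, and the use of the pyttsx3 text-to-speech library are my assumptions, not anything RobotC provides):

```python
# Minimal sketch: speak RobotC compiler errors that have been saved to a text file.
# Assumes one error per line in errors.txt -- both the path and format are placeholders.
import sys
import pyttsx3  # third-party text-to-speech library (pip install pyttsx3)

def read_errors_aloud(path="errors.txt"):
    engine = pyttsx3.init()
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    if not lines:
        lines = ["No errors found."]
    for line in lines:
        print(line)       # echo as plain text for a screen reader or braille display
        engine.say(line)  # queue the line for speech
    engine.runAndWait()   # speak everything that was queued

if __name__ == "__main__":
    read_errors_aloud(sys.argv[1] if len(sys.argv) > 1 else "errors.txt")
```

A real integration would grab the text from the dialog automatically (for example via Windows UI Automation), but even a manual copy-paste into a file would let the student hear exactly which line and error the compiler is complaining about.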
PROS can be developed in any IDE, or no IDE at all. The same goes for ConVEX. The build-chain tools working behind the scenes in those environments are well suited to accessibility tools and customized interfaces. @jpearman, care to weigh in?
EDIT:
I just noticed this question, so I’ll address it specifically:
Unfortunately, it seems that accessibility within Atom is not quite up to par.
Apparently, VS Code is a little more promising in that respect (though it may also be suboptimal). One of our alumni has been working to develop a package for VS Code when he has time, and I'll ask him to look into building out accessibility features, if any are feasible.
As @kypyro mentioned, we do support writing code in any editor through our CLI, which manages project building and uploading. You might be able to get away with writing code in an accessible editor and then using the CLI, though as noted, you may run into problems when trying to fix errors. That being said, if a screen reader (or another tool) works with the command prompt (I'm assuming a Windows environment), then you should be okay on that front too.
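To sketch what that console workflow could look like, here's a rough Python wrapper that runs a build and prints only the warning and error lines, which is usually much friendlier to a screen reader than a full build log. The `pros make` command is my assumption for the build step; substitute whatever command your version of the CLI actually uses:

```python
# Rough sketch: run a command-line build and surface only warnings and errors,
# so a screen reader in the console has less noise to wade through.
import subprocess
import sys

BUILD_CMD = ["pros", "make"]  # assumption -- adjust to match your CLI/toolchain

def filtered_build():
    result = subprocess.run(BUILD_CMD, capture_output=True, text=True)
    output = result.stdout + result.stderr
    problems = [line for line in output.splitlines()
                if "error" in line.lower() or "warning" in line.lower()]
    if result.returncode == 0 and not problems:
        print("Build succeeded with no warnings.")
    else:
        for line in problems:
            print(line)  # plain text, one problem per line, easy to arrow through
    return result.returncode

if __name__ == "__main__":
    sys.exit(filtered_build())
```

The same idea works for any toolchain that can be driven from the command line, which is part of why the CLI route seems promising here.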
Overall, it seems that adding functional accessibility features to text editors in general is a deeper problem than it first appears, as some participants in the issue I linked above mention.
By the way, I have to thank @mgarza for posting about this.
Personally, I had never given the idea of accessibility in software much thought before today, and frankly I think the same is true of much of the computer science/software engineering field.
Even in terms of browsing the web, there doesn't seem to be much assistance aside from the Accessible Rich Internet Applications (ARIA) specification. Even that is either misused or, in my experience, not implemented by web developers at all, which leaves the browser and/or screen reader to guess (presumably poorly) at what is going on.
Fair disclosure: I may be way off base here; as I have said, I haven't given this issue much thought before now. Maybe the technologies Just Work™. But I'm inclined to think the functionality is less than ideal, based mainly on my own experience and a few pieces of anecdotal evidence I have seen around.
In conclusion:
The software development industry as a whole could, and should, be making a better effort to accommodate all users.