WHY CART in Courts/Justice

November 28, 2010

CCAC: This gives us a wonderful firsthand report of the experience of a CART professional. We can all imagine how important the “language of real time text” was for the person who required it. The CCAC aims to add firsthand reports from “consumers” as well. Send us your story via email to ccacaptioning@gmail.com.

As I await a verdict in what would normally be an “average” vehicular manslaughter trial, I wanted to share the many interesting stumbling blocks that arose. The defendant in this five-day trial is profoundly hard of hearing. I was called in and hired by the Superior Court as a “realtime interpreter” to provide accessibility for the defendant during his trial. The official reporter proceeded with her duties, as it would have been impossible to do both jobs, which I will explain later. I was fortunate to have a wonderful courthouse staff to work with in the small town of Bisbee, in Cochise County, AZ, about 1.5 hours from my home in Tucson.

Before the trial proceeded, the Judge, Bailiff, Official Reporter and I had a meeting to discuss the many situations that could arise and we wanted to have answers or alternatives beforehand. With a charge of this degree, I certainly didn’t want to be a point on appeal, nor did the judge. As I explained to the Judge, my job and goal was to provide complete access of the trial proceedings to the defendant.

On the record at the beginning of trial, I was sworn in as a realtime interpreter with both counsel’s agreement. Logistically, I was set up next to the defendant at all times. I made sure my wires were not in a hazardous position for others, that my positioning was not hampering jurors, spectators, or courtroom personnel, and that my notebook computer screen wasn’t picking up any glare. I found that black lettering on a light screen worked best, although I do have a color notebook. I used larger-than-normal print, which also helped. I made it a point throughout the trial to check in with the defendant to make sure he was reading, understanding, and comprehending the process as much as possible. The Judge also, on the record, checked periodically that the defendant was understanding, didn’t have any questions, and was proceeding without any problems.

To ensure the readability of my “transcript,” I used speaker identifications more often than normal, so there was no confusion with “Q” and “A.”

At times the defendant asked me questions about terminology such as “sustained” and “overruled”; I referred him to his counsel for those answers.

During breaks I was available for discussions between counsel and the defendant, their witnesses, family members, etc. This is where the official reporter could not participate.

Cynthia Reed, the official reporter in this case, helped identify situations that would have made it impossible for her to do both jobs.

The defendant did occasionally interrupt me during testimony asking for clarification on speaker identifications, terminology, procedures, etc.

The official reporter’s job is to have 100% of her attention focused on the official Verbatim Report of Proceedings. As an “interpreter,” I am writing for 100% comprehension by the defendant and am not as concerned with being 100% verbatim, as the official is. I have the flexibility to paraphrase if need be, depending on the comprehension of the person reading. Again, official reporters are 100% verbatim.

In this mode of writing, accuracy is a must: conflicts cannot exist, phonetic untranslates are helpful, and fingerspelling needs to be finely honed, awaiting the expert witnesses. I had, of course, asked in advance for any reports that had been filed, but didn’t receive any. The morning of trial, I did get the witness list and an idea of the kinds of experts to be called. Thankfully, with many years under my belt, an accident reconstructionist isn’t the worst thing that can happen.

During our pre-trial meeting, we discussed putting another realtime setup in the Judge’s chambers for motions held outside the presence of the jury, to streamline the process instead of asking the jury to go in and out of the courtroom. That situation never arose, but at least we were prepared.

Also, logistically, if the defendant was to be called as a witness, we made arrangements for that setup, too.

A videotaped deposition was played during the trial; I had arranged a hard copy of the deposition for the defendant, and he simply read along as it was played. The same was done when a taped interview was played for the jury: a hard copy was provided for the defendant to follow.

I made it very clear to all involved that my realtime would be deleted on a daily basis and that no record of it would be kept, just as would be the case with any other language interpreter. This kept defense counsel from even asking for the privilege of seeing the day’s testimony, as he may have been tempted to do.

The verdict is in, and I can honestly say I have never been more nervous writing for someone than I was today. The defendant was literally following my written words with his life and, happily seeing the not-guilty verdict, sighed heavily. I, on the other hand, held my breath trying to keep my hands still, which was impossible.

I think the most important lesson I have learned from this experience is to be flexible while keeping ethical considerations in mind and remembering the goal of providing access.

Now if only the situation would come up where my services would be needed by a juror and I can finally find out what goes on in a jury room!

Submitted by the author for the CCAC (from an article Deanna authored for the NCRA Journal of Court Reporting).
Deanna P. Baker, FAPR, RMR
Realtime Captioner/Consultant
Flagstaff, AZ 86004
DeannaPBaker – AOL IM


It is axiomatic that all litigants be able to understand the proceedings. If a person is unable to hear and understand, that person is unable to participate, and if unable to participate, it is a denial of due process under the Fifth and Fourteenth Amendments. See United States ex rel. Negron v. New York, 434 F.2d 386, 389 (2d Cir. 1970). A litigant’s difficulty with English may impair his or her ability to communicate with counsel, to understand testimony in English, or to make himself or herself understood in English. Hearing people often assume facts to be true which, on close examination, are not. For example, some believe that a litigant probably can “lip read.” But that view is the product of misinformation. As one commentator wrote, the ability to lip read is more a function of myth than fact. See Jo Anne Simon, The Use of Interpreters for the Deaf and the Legal Community’s Obligation to Comply with the A.D.A., 8 J.L. & HEALTH 155, 175-76 (1994). While many deaf people can lip read to some extent, only 25% to 40% of the English language is visible on the lips in the best of conditions. Id. at 176. It is seldom sufficient in and of itself. Id. Another commentator noted that one study found that the best lip readers (or, preferably, “speech readers”) could fully comprehend only 26% of what was said to them. Deirdre M. Smith, Confronting Silence: The Constitution, Deaf Criminal Defendants, and the Right to Interpretation during Trial, 46 ME. L. REV. 87, 97 (1994).

Courtroom settings provide an excellent example of the limitations of speech reading. Id. at 98. The speaker may be at a distance from the deaf person and, if there are several participants in the proceeding, the speaker may be turned away from the deaf person. See id. As well, the courtroom setting and decorum eliminate many visual cues used by deaf people who speech read. Id. The use of legal terms and other words unfamiliar to lay persons can further limit understanding. Id. For example, professional jargon contains many words and phrases that would be incomprehensible to one who is speech reading. Id. This is why, even if a deaf person can find ways to communicate outside the courtroom, as the circuit court in this case alluded to, it is a stretch for the court to reason that the person can then also adequately communicate inside the courtroom. Writing notes back and forth is also an inefficient and ineffective method of communicating in the courtroom. People, hearing or deaf, tend to condense what they would say in other modes when they are writing notes, which could be extremely prejudicial in legal settings. Id. And, many deaf people have a reading level well below average. Id.

English is a second language to most deaf people who lost their hearing during childhood. Id. But it is the primary language for those who lost their hearing later in life. The most effective means of communication for later-deafened or hard of hearing persons is CART. It provides the mechanism for understanding what is being communicated in the courtroom in “real” time. It is accurate and allows full participation by the person in need of this accommodation.

Rick Brown

CCAC thanks Judge Richard Brown for sharing this document with us, from a 2009 Court Case.

WHY CART in Government

November 26, 2010

Why CART in Government?
1. Good government leads the way for all its citizens by setting best standards for equality and inclusion.

2. To reduce discriminatory gaps which now still exclude many able citizens (who happen to be deaf, deafened, or have a hearing loss, or who need quality text for many other good reasons) from regular and important government meetings, workshops, rallies, advisory committees, and public input to city, state, or federal bodies.

3. To set the standard high, so all sectors can share the benefits as well as the responsibilities that come with full citizenship participation. To participate means to contribute and give back.

4. To recruit and involve volunteers in local, state, and national initiatives among people with different hearing needs. If these many able citizens have the tools, they will be able to contribute more than they can now, where resources are missing or irregular.

5. To teach about citizenship and voting responsibilities – a most essential part of government. CART and quality captioning help all, not only people with hearing differences, but also new citizens learning a new language.

6. To establish an effective communication channel with all communities and constituencies, and reduce the mass media digital divide. CART and captions universally!

7. To educate elected and non-elected government representatives about the challenges and contributions of individuals with so many forms of hearing loss and deafness. Most do not use sign language, contrary to popular understanding. When “hearing” members of government are aware of resources for inclusion of all, they become better public servants also.

8. To lobby for further legislation and reduction of barriers to make access a truly achievable goal.


EXAMPLES INCLUDE (please send the CCAC your examples to add here soon – email to ccacaptioning@gmail.com)

Nice inclusion on a jury in Syracuse, NY: http://www.syracuse.com/news/index.ssf/2010/10/after_deaf_oswego_man_dismisse.html

CART was included in July 2010 on the lawn of the White House for the 20th anniversary celebrations of the ADA – laws for inclusion of all able citizens. It was also included on the online video when President Obama recently signed the 21st Century Communications and Video Accessibility Act.

Examples from your state legislature?

Your town meetings? (We know that the small town of Stonington, Maine provided CART for a town meeting at one time, for inclusion of a valued community citizen.)

Candidates for elections?

And internationally, good examples: in Ireland, CART for the Wicklow County Council (in the chamber and also streamed online); also CART for some committee meetings in the Houses of Parliament.


Above “Why CART…” prepared for the CCAC by Martha Galindo and:

Galindo Publicidad Inc., 6844 West Sample Road,

Coral Springs, Florida 33067, U.S.A.

Blog: Translations And More


Why CART in…Healthcare

November 25, 2010

Why CART in Health Care?
Communicating with your physician or any healthcare provider is always vital, and sometimes a matter of life and death. Could there be a better reason for full verbatim real time text (CART) for those who require it? Whether it’s a “routine” check-up, a “usual” follow-up visit, a first meeting with a new provider, a conference to help care for a loved one, an emergency room visit, or a health education video handed to you for cancer treatment, CART or captions will serve thousands if not millions. Why? An estimated 37 million people in the USA alone have deafness or hearing loss. Not all need CART. Some use hearing aids or other listening devices for full speech comprehension. (Keep in mind that many hearing aids wind up in drawers, never to be seen again, because hearing aids do not cure hearing loss and are uncomfortable for many; the result is that some deny any hearing loss and learn to “bluff” extremely well.) CART is a universally appropriate language (in whatever language you use) for all who can read. It is used by people who are deaf also (though some prefer sign language). CART provides an easy record (transcript) of what is said, for best health and proper treatment. While talking with your provider, while you or the nation is paying for the best healthcare one hopes to find, it’s essential not to miss a word.

The CCAC website has a number of videos on the “Articles and Resources” page to illustrate what CART is: http://www.ccacaptioning.org. If you believe the cost of CART or captioning is too much, consider what the patient deserves, and consider doing no harm.

The ADA applies to communication access just as it applies to wheelchair access. There are ways to budget for CART (on-site or remote delivery of full verbatim text), and for a 30-minute consultation, or even more, the cost is fully manageable. There is a “learning curve” to this. After that, for the estimated five percent of the population who require this for inclusion, both provider and patient will benefit from the quality of care delivered.

Below is a partial list of Hospitals and Medical Offices that offer CART:
Mass General Hospital, Boston, MA

Captel phones needed in Canada

November 25, 2010

While some of us think RCC would be even better, we are lucky to have CapTel in many states (automated speech-to-text via voice recognition).

Some in Canada are advocating for CapTel phones now, and they deserve this! If you have friends in Canada, a good neighbor to the USA where CapTel is in most states, shout out to them to do the survey online there – it’s good for all of us (globally, hearing and not hearing).
(Wondering what RCC is? It’s truly verbatim, accurate, and fast, for those of us who work quickly – real professionals doing the translation in real time; it’s the best, and we hope all make noise for this too.) From the Canadian association today:

Sign CHHA’s Petition Today!
Added on 07-05-2010
Over the past few months, an important issue has surfaced which needs immediate attention:

Captioning Telephones

The Canadian Hard of Hearing Association is working hard to bring CapTel telephones to Canada. This telephone combines the convenience of a telephone with the text capabilities of the internet, showing you the incoming caller’s words as real-time captions on your CapTel telephone or over the internet. Your callers do not have to dial a special number to connect to the captioning service, as they do with a TTY phone. They call your own number, and the captions simply come up automatically on all calls, incoming or outgoing. During your phone conversations, everything the caller says is displayed word for word in caption form on the telephone’s built-in screen. A petition has been started to show support for bringing CapTel phones to Canada.

Download this petition and return it to the Canadian Hard of Hearing Association to show your support for this type of technology.

If you are interested in obtaining multiple signatures for this petition, please contact the Canadian Hard of Hearing Association (chhanational@chha.ca).

WGBH Comments to the FCC re Captioning

November 25, 2010

Public Comments – thank you WGBH from the CCAC:

Before the
Federal Communications Commission
Washington, D.C. 20554
In the Matter of )
Closed Captioning of Video ) CG Docket No. 05-231
Programming ) ET Docket No. 99-254
November 24, 2010
Submitted By:
Larry Goldberg and Marcia Brooks
WGBH National Center for Accessible Media
One Guest Street
Boston, MA 02135
The WGBH Educational Foundation’s National Center for Accessible
Media (NCAM) hereby submits comments on the Commission’s
Pleading Cycle to refresh the record in the proceeding noted above
concerning the Commission’s Closed Captioning Rules.
1. The FCC has asked for comment on whether the Commission
should establish quality standards for non-technical aspects of
closed captioning, including the accuracy of transcription, spelling,
grammar, punctuation and caption placement, what the adoption
of such standards would cost to programmers and distributors,
whether the captioning pool consists of an adequate number of
competent captioners to meet a non-technical quality standard
mandate, and whether different captioning quality standards
should apply to live and pre-recorded programming.
2. The FCC has asked for comment to refresh the record regarding
the need for mechanisms and procedures, over and above the
“pass through” rule, to prevent technical problems from occurring
and to expeditiously remedy any technical problems that do arise,
including current and proposed obligations for video programming
distributors to monitor and maintain their equipment and signal transmissions.
3. The FCC has asked for additional comment on whether to
establish specific per violation forfeiture amounts for noncompliance
with the captioning rules, and if so, what those
amounts should be, and whether video programming distributors
(VPDs) should be required to file closed captioning compliance reports.
4. Since filing comments on this proceeding on November 10, 2005,
the WGBH National Center for Accessible Media (NCAM) has
conducted significant research and development that now
advances the Commission’s ability to establish quality standards.
NCAM believes the Commission should indeed establish
standards for non-technical quality of closed captioning.
5. The WGBH Educational Foundation is one of the country’s
leading public broadcasters and has long considered one of its
central missions to be increasing access to media for people with disabilities.
6. WGBH’s commitment to accessible information began in 1971
through establishment of The Caption Center, the world’s first
captioning agency, to produce captions for TV programs so that
deaf and hard-of-hearing viewers could gain equal access to those
programs. Today, The Caption Center is part of WGBH’s Media
Access Group and produces captions and subtitles for every facet
of the television and home video industry. The Media Access
Group additionally services the theatrical film industry, museums
and theme park attractions.
7. The WGBH Media Access Group also houses WGBH’s
Descriptive Video Service ® (DVS ®) which makes television
programs and movies accessible to people who are blind and
visually impaired. WGBH developed DVS in 1990 and continues to
lead the world in creating accessible electronic media for people
with disabilities.
8. The WGBH National Center for Accessible Media was founded in
1993 to build on WGBH’s knowledge base in the field of access
technologies. NCAM is a research and development facility
dedicated to addressing barriers to media and emerging
technologies for people with disabilities in their homes, schools,
workplaces, and communities.
9. These comments expand upon comments The WGBH National
Center for Accessible Media previously submitted in November
2005 on the Commission’s Notice of Proposed Rule Making
concerning the closed captioning of television programs.
Non-technical Quality Standards for Closed Captioning – The
Marketplace Has Still Not Corrected Problems
10. Caption errors continue to be pervasive, especially as the use
of Automatic Speech Recognition (ASR) – a technology not ready
to be used for real-time captioning – is becoming more common.
The lack of a common way to measure accuracy may have held
back establishment of quality requirements in the past, but with
newly developed technology created by WGBH/NCAM’s
innovators with significant input from caption users, deaf education
experts, and with measurement parameters developed by the
National Institute of Standards and Technology (NIST)1 and
National Court Reporters Association (NCRA)2, the FCC can now
set fair levels of expected performance.
11. NCAM is developing a prototype automated caption accuracy
assessment system that will identify, rank and report on the
frequency and severity of caption errors through its Caption
Accuracy Metrics project (funded by the National Institute on
Disability and Rehabilitation Research, U.S. Department of
Education, #H133G080093-10)3.
Current State of Caption Accuracy Measurement
12. Accuracy measurements are traditionally based on the model
used at the National Institute of Standards and Technology (NIST).
This approach identifies the differences between a test transcript
(in this case, a caption text file) and a clean reference transcript,
often called the “ground truth” transcript, which accurately reflects
what was spoken.

1 NIST: http://www.nist.gov
2 NCRA: http://www.ncraonline.org/
3 Caption Accuracy Metrics project:

The two transcripts are aligned and errors are
categorized as:
• Substitutions – words in the test transcript that are different
from the reference transcript;
• Deletions – words that are in the reference transcript but are
omitted from the test transcript; and
• Insertions – words that are added to the test transcript but
are not in the reference transcript.
The total number of these errors is divided by the total word count of
the reference transcript to calculate a Word Error Rate. An accuracy
rate is 100% minus the error rate. Accuracy rates for most caption
text range from 85 to 95% by this measure, with lower accuracy
usually due to more extensive deletion of text.
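The NIST-style calculation described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general method, not NCAM's actual tool: the two transcripts are aligned with a standard edit-distance dynamic program, the combined count of substitutions, deletions, and insertions is divided by the reference word count, and accuracy is 100% minus that rate.

```python
def word_error_rate(reference, hypothesis):
    """NIST-style WER: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i               # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j               # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,               # substitution (or exact match)
                          d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1)   # insertion
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("jumps" -> "jumped") and one deletion ("the"):
ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown fox jumped over lazy dog"
wer = word_error_rate(ref, hyp)
accuracy = 100 * (1 - wer)   # accuracy is 100% minus the error rate
```

Because every dropped or inserted word counts against the score, whole-phrase deletions pull this measure down sharply, matching the observation above that lower accuracy is usually due to more extensive deletion of text.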
Caption agencies have used a different approach to error reporting
for live stenocaptioning. Court reporting software used by most
captioners identifies “untranslates” – words that do not have a match
in the stenocaptioner’s dictionary. These reflect a portion of the
substitutions that would be found in the caption file but they do not
typically identify deletions or insertions. Accuracy rates for caption
text by this measure usually fall in the 97-99% range.
Quality Standards Informed by NCAM Research and Development
13. Technical development to date for the Caption Accuracy
Metrics project demonstrates a proof of concept that text-based
data mining and automatic speech recognition technologies can
produce meaningful data about stenocaption accuracy that meets
the need for caption performance metrics.
14. Further, it is now possible to quantify the severity of specific
caption error types and to specify the degree to which each error
type makes a caption hard to follow, using data from a national
consumer research web-based survey the Caption Accuracy
Metrics project conducted in Spring 2010 that yielded over 350
responses from caption viewers. Caption viewers were presented
with actual caption error samples representing 17 different error
types, and they ranked the severity of each error type. The survey
results provide valuable data about how to rank the severity of the
17 types of errors evaluated through this survey. The summary
consumer research report will be available in December 2010 at
the Caption Accuracy Metrics project website.
15. Combining the research and development as noted above, it is
now possible to generate an accuracy report per program that
estimates the level of caption accuracy using Automatic Speech
Recognition (ASR). This process occurs after the real-time
captioned program is broadcast; it does not use ASR to
generate the captions.
Further Definitions of Caption Accuracy
16. NCAM developed a caption error ontology that identifies 17
caption error types sub-categorized by the major three error types
identified by the National Institute of Standards and Technology
(insertions, substitutions and deletions), and assigns a severity
ranking informed by the consumer research data. This ontology
addresses many of the questions identified by the FCC such as
spelling, grammar, and punctuation. The ontology and the severity
ranking for each error type are expanded upon in the Caption
Accuracy Metrics survey report, which notes there is a wide range
of error types in real time captioning and they are not all equal in
their impact on caption viewers. Treating all substitution and
deletion errors the same does not provide a true picture of caption
accuracy. The least offensive errors were judged to be simple
“substitutions” like wrong tense and punctuation; however,
substituting pronouns and/or nominals for proper names was also
judged to significantly impact viewers’ understanding.
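A severity-weighted score of the kind this ontology enables might be computed as follows. The error types and weights below are invented placeholders, not the survey's actual 17 types or rankings; the sketch only shows the shape of the calculation.

```python
# Hypothetical severity weights (0 = harmless, 1 = most disruptive).
# The real project derived its rankings from the 2010 consumer survey.
SEVERITY = {
    "wrong_tense": 0.1,               # simple substitutions judged least offensive
    "punctuation": 0.1,
    "pronoun_for_proper_name": 0.9,   # judged to significantly impact understanding
    "deletion": 0.6,
    "insertion": 0.4,
}

def weighted_error_score(errors, reference_word_count):
    """Sum the severity weights of observed errors, normalized by reference length.

    `errors` is a list of error-type labels produced by transcript alignment;
    unknown labels get a middle-of-the-road weight of 0.5."""
    total = sum(SEVERITY.get(kind, 0.5) for kind in errors)
    return total / reference_word_count

# Two errors in a 100-word reference: a mild one and a severe one.
score = weighted_error_score(
    ["wrong_tense", "pronoun_for_proper_name"], reference_word_count=100)
```

Unlike a flat Word Error Rate, this kind of score lets one severe error (a pronoun substituted for a proper name) outweigh several trivial ones, which is the point the survey results make.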
17. In September 2010, The Caption Accuracy Metrics project
convened a technical review panel consisting of many of the major
stakeholders in caption quality (including broadcast and cable
television networks, caption vendors, deaf education experts, and
the National Court Reporters Association). There was wide
consensus that each sector would fully support defined caption
quality standards, but only if there is full and equitable compliance
across the range of industry stakeholders. NCAM believes it is the
FCC’s role to define and ensure compliance with caption quality
standards.
18. NCAM believes that the Commission should include and define
caption placement requirements in its caption accuracy standards.
Through research and development NCAM conducted for its
Access to Locally-Televised Onscreen information project (funded
by the U.S. Department of Education, National Institute on
Disability Research and Rehabilitation, grant #H133G070278)4,
NCAM developed a prototype system that demonstrates the ability
to automatically resolve display conflicts between captions and onscreen
graphics. By developing methods of prioritizing text and
graphics messages within automated display systems, the system
automatically relocates closed captions so they are not obscured
by emergency information (also known as “crawls”) located on the
screen. Note that the system also automatically translates the text
in the emergency crawls to speech, for viewers who are blind or
visually impaired.

4 Access to Locally Televised On-Screen Information
Establishment of Reporting Requirements and Non-Compliance
Forfeiture Amounts
19. The FCC has asked for additional comment on whether to
establish specific per violation forfeiture amounts for noncompliance
with the captioning rules, and if so, what those
amounts should be, and whether video programming distributors
(VPDs) should be required to file closed captioning compliance
reports. NCAM believes that the Commission should establish and
enforce VPD reporting requirements that are developed in parity
as appropriate with other existing FCC reporting requirements
where a structure to manage reporting requirements exists or has
been defined (e.g., telecommunications industry network outage
reports, etc.). Because the marketplace has not significantly
corrected caption quality problems, and because the means by
which to define and measure caption quality standards are being
established, further examination of forfeiture amounts – perhaps
tied to compliance reporting requirements – is recommended.
Cost of Adoption of New Caption Standards
20. From the Caption Accuracy Metrics technical review panel,
which represents a wide range of stakeholders in caption quality, it
is apparent that many video program distributors (VPDs) and
captioning agencies are already monitoring caption quality to
some degree, and in some cases service level agreements exist
between television networks and their caption vendors. However,
there is not a standard way to define or measure caption quality.
Many panel members agreed that an automated system of caption
quality monitoring would in many cases ultimately decrease the
cost of monitoring caption accuracy and levels of service they are
currently tracking through labor-intensive, manual means. If the
Commission indeed sets caption quality standards, all
stakeholders — VPDs and caption agencies who are already
tracking accuracy levels, as well as those who do not currently
have an established means to do so — will be at an advantage,
given the likelihood of having access to an automatic system to
measure caption accuracy. The upfront costs of such a system are
yet to be determined, but are likely to ultimately be a far more
cost-effective option than manual monitoring and/or payment of
potential fines. VPDs further stand to benefit from an automatic
system that can identify caption errors such as garbling caused by
technical errors, which can help inform troubleshooting of the
transmission equipment chain. Establishment of caption quality
standards will also likely ease the significant burden on consumers
to report caption quality issues, and therefore, also ease the
burden on local television stations, the FCC and national
consumer advocacy organizations in responding to complaints
from viewers who rely on closed captioning for equal access to programming.

Blog of interest

November 25, 2010


A New Zealand “fille sourde” (deaf girl) – why French? CCAC is always curious!

Blog on captioning

November 25, 2010

http://rhianonelan.blogspot.com is another blogger – she sure writes well – wish she’d mention the CCAC!


Where Am I?

You are currently viewing the archives for November, 2010 at CCAC Blog.