Here are a few new approaches regarding the future of machine translation that go beyond post-editing, along with some practical tools in interactive and adaptive translation technology.

Post-editing was never meant to be the future of machine translation (MT). For researchers seeking fully automatic translation, post-editing is considered more of a failure mode. For human translators, it often forces the user to correct erroneous output. But translation memory (TM), which is essentially a deterministic MT system augmented with heuristics, is not the future either. So what should we expect? Is the current post-editing technology our best hope? To help answer these questions, let's begin with a short history of post-editing before delving into more recent developments in machine-assisted translation technology.

## 1960s: First Experiences with Post-Editing

In the January 1965 issue of Physics Today, Robert Beyer, a professor of physics at Brown University, described his experience post-editing a scientific paper from Russian into English. He was participating in a National Science Foundation (NSF) program started in 1955 for the purpose of translating Soviet physics journals. Ten years later, the program was generating 15,000 pages annually at a cost of $500,000, which was covered through subscriptions.

Beyer noted that a language barrier, as even the tourist knows, is an effective way of discussing secrets in plain view. For example, he observed that Sputnik was not an instantaneous achievement: it had been "foreshadowed in their literature, but this was largely unknown in the West." The problem was how to broaden NSF's program to other languages and other academic fields to cover an ever-growing range of content. The Journal of Experimental Physics of the USSR alone had expanded from 1,500 to 4,500 pages per year during NSF's program. Beyer enumerated possibilities for handling this information deluge, including MT.
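The intro above describes translation memory as essentially a deterministic MT system augmented with heuristics: an exact lookup of previously translated segments, plus a fuzzy-match heuristic that surfaces near misses for the translator to post-edit. A minimal sketch of that idea, with hypothetical names and an illustrative 70% match threshold (not any real TM tool's API):

```python
# Hypothetical sketch of a translation-memory (TM) lookup: a deterministic
# store of past translations, plus a fuzzy-match heuristic for near misses.
# The segment data, function names, and 0.7 threshold are illustrative only.
from difflib import SequenceMatcher

tm = {
    "The file could not be opened.": "Die Datei konnte nicht geöffnet werden.",
    "Save your changes before exiting.": "Speichern Sie Ihre Änderungen vor dem Beenden.",
}

def lookup(source, threshold=0.7):
    """Return (match_score, stored_source, stored_target), or None."""
    if source in tm:  # exact match: purely deterministic retrieval
        return (1.0, source, tm[source])
    # Fuzzy-match heuristic: surface similarity against stored segments.
    best = max(tm, key=lambda s: SequenceMatcher(None, source, s).ratio())
    score = SequenceMatcher(None, source, best).ratio()
    if score >= threshold:
        return (score, best, tm[best])  # translator post-edits this suggestion
    return None  # below threshold: the TM offers no help
```

Anything below the threshold yields nothing, and anything above it still hands the translator output to correct, which is why the text argues TM shares post-editing's limits rather than transcending them.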