Preliminary Analysis and Translation of a Specialized Text. Knyazheva E.A.

the changes effected and how and why do they come about? Surely the most studied
communication technology in this context is literacy and its relations to orality. Most
of the research on the topic has come from linguists examining register variation.
Some have postulated a dichotomy of these modes of communication, many see them
as two extremes of a continuum (Halliday 1985, Tannen 1982), and others maintain
that they are different dimensions altogether (Shank 1993). The advent of modern
telecommunication technologies like the one described in this paper has further
blurred the lines in this contentious field. Traditional register analysis distinguished
between the written and the spoken mode of communication. More recent approaches
have abandoned these mechanical dichotomous classifications based on the medium
alone and have developed the so-called Multi-Dimensional approach (Biber 1994).
This approach is based on quantitative analysis of corpus data that can determine the
distribution patterns of distinct linguistic features. This shifts the focus away from
situational characteristics that were used earlier to distinguish registers.
Text B
Eliminating Spurious Error Messages Using Exceptions, Polymorphism, and
Higher-Order Functions.
It can be difficult to write a compiler or other language processor that issues
more than one trustworthy error message per run. If an intermediate result is wrong
because of an error in the input, the compiler may complain not only about the
original error, but about other errors that follow from its attempt to process the faulty
intermediate result. The worst offenders, like early Pascal compilers, are so bad that
users learn to disregard all but the first error message. Even modern compilers written
by smart people are not immune. One widely used C compiler assumes that
undeclared identifiers are integers; when this assumption is unreasonable the
compiler sprays error messages. Another compiler prints messages identifying faulty
inputs as "bogus", but it complains multiple times about the same bogosities. One can
eliminate spurious error messages by halting after detecting one error, but this
solution is acceptable only if the compiler is very fast. The ideal is for a compiler to
detect every real error in its input, but never to issue a spurious message.
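
The cascade described above is easy to reproduce in a few lines. The OCaml sketch below (with helper names typeof and check_string_arg invented here for illustration, not taken from the text) shows how the guess that an undeclared identifier is an integer turns a single mistake into two messages.

(* Toy illustration of how one input error can produce several messages:
   an undeclared identifier is "recovered" as an int, and a later check
   that uses the guess complains again about the same mistake. *)

let declared = [ ("n", "int") ]

(* Look up an identifier's type; on failure, report and guess "int". *)
let typeof errors x =
  match List.assoc_opt x declared with
  | Some t -> t
  | None ->
      errors := ("undeclared identifier " ^ x) :: !errors;
      "int"                               (* the guess that starts the cascade *)

(* A later check that consumes the (possibly guessed) type. *)
let check_string_arg errors x =
  if typeof errors x <> "string" then
    errors := (x ^ " is not a string") :: !errors

let () =
  let errors = ref [] in
  check_string_arg errors "s";            (* one real error in the input *)
  List.iter prerr_endline (List.rev !errors)
  (* prints two messages for a single mistake:
       undeclared identifier s
       s is not a string   <- spurious follow-on complaint *)
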
This paper describes an implementation technique that helps compiler writers
approach the ideal; in principle a compiler using this technique can detect any error
not made undetectable by a previously detected error. Moreover, a compiler writer
need not apply this technique from the beginning; it can be retrofitted to compilers
that assume no errors occur. The technique is not needed for parsing; it helps in later
phases, like static semantic analysis and generation of intermediate code. The
technique need not be applied to even later phases, like optimization and code
generation, because these phases are executed only on intermediate results obtained from
valid inputs.
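
The paragraph names the ingredients of the technique but shows no code. The OCaml sketch below is only one plausible way to combine exceptions, polymorphism, and higher-order functions; the type and function names (checked, error, lift) are assumptions introduced here, not the paper's actual implementation.

(* A minimal sketch of the general idea: an intermediate result ruined by
   an already-reported error is marked, and later phases are wrapped so
   that using a marked value raises an exception instead of producing a
   second, spurious message. *)

exception Error_already_reported

(* Polymorphic "checked" result: a real value, or a marker for a result
   that an earlier, already-reported error made meaningless. *)
type 'a checked =
  | Good of 'a
  | Bad

(* Report an error exactly once, at the point where it is detected. *)
let error msg : 'a checked =
  prerr_endline ("error: " ^ msg);
  Bad

(* Using a Bad value raises rather than complaining again. *)
let force = function
  | Good v -> v
  | Bad    -> raise Error_already_reported

(* Higher-order wrapper: run a later phase on a checked input and turn
   the exception back into a silently propagated Bad marker. *)
let lift (phase : 'a -> 'b) (x : 'a checked) : 'b checked =
  try Good (phase (force x)) with Error_already_reported -> Bad

(* Toy use: an undeclared identifier is reported once; the later phase
   that consumes its type stays silent. *)
let lookup env x =
  match List.assoc_opt x env with
  | Some t -> Good t
  | None   -> error ("undeclared identifier " ^ x)

let () =
  let env = [ ("n", "int") ] in
  let show = lift (fun t -> "the type is " ^ t) in
  ignore (show (lookup env "n"));   (* works normally *)
  ignore (show (lookup env "m"))    (* one message, no follow-on complaints *)

Because the message is printed where the error is detected and the Bad marker then propagates silently through wrapped phases, each real error is reported once, which is the ideal stated in the first paragraph.
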
Text C
Overview
The 6011E and 6012E Families use E Series technology to deliver reliable data
acquisition capabilities to meet a wide range of application requirements. For a more