Two approaches to computational modeling
of natural language
National Polytechnic Institute
Mexico City, Mexico
The Meaning–Text model (MTM), i.e., the theory of natural language as a bi-directional Meaning ⇔ Text transformer, is evaluated against the background of the Western computational linguistic paradigm. The features of similarity and distinction between the MTM and the Western mainstream in its modern state are discussed. Driven by its own internal logic, the Western tradition has developed many features similar to those of the MTM, and the similarity keeps growing. Nevertheless, many features already developed within the MTM and still absent from the Western tradition can contribute to the progress of the whole field.
Key words: computational linguistics, Meaning – Text model, generative grammar, dependency syntax, history of linguistics.
Let us survey the proceedings of the COLING, ACL, ANLP, and IJCAI conferences (the latter in its linguistic part only) for the last decade, and then imagine that we ask an ordinary Western specialist in computational and applied linguistics: “What do you know about the Meaning–Text model?” Most likely, we will hear the answer “What do you mean?” If it were explained to him or her that this is the linguistic theory originated 35 years ago by Igor A. Mel’čuk in Russia and then developed by him in Canada and by the group of Yu. D. Apresian in Russia, the best-read of the specialists questioned might recollect: “Ah, this is something from dependency grammars. It has nothing to do with English or with me.” Most likely, however, he or she will remain silent.
It should be acknowledged that this theory, one of the earliest and most advanced applied linguistic theories of the 1970s, whose author drew to his lectures hundreds of listeners with very different interests in the Russia of the early 1970s, was left essentially unclaimed in the West. As a result, even the editor of the book  by Mel’čuk, though quite sympathetic to the author, was compelled to call him in the preface a “great outsider”.
In spite of this, we believe that this linguistic theory is not ideologically outdated. It continues to develop both in applied aspects (cf., for example, the books [2, 3]) and especially in theoretical ones [1, 4, 5], though purely applied publications in English within its framework have been rather rare throughout its evolution. Nevertheless, within the modern community of Western computational linguists, little can save it from the role of eternal outsider. Several of its features have already been realized independently by Western applied linguistics; some others are likely to be re-discovered. With all this, nobody will notice that a re-discovery is taking place, since quite new terms, formalisms, and algorithms will be proposed.
Below, we will first list the main features of the MTM as we understand them, beginning with the features it already shares with numerous other linguistic theories and continuing with those that remain peculiar to the MTM to this day. We will then formulate the reasons why the model has been practically ignored by the Western scientific community.
Let us briefly enumerate those features of the MTM which now, after the passage of many years, are considered common features, or even commonplaces, of the majority of modern linguistic theories.
Functionality of the model. Practically all well-known linguistic models are now functional, i.e., they try to reproduce the functions of language without reproducing the activity of the brain, which is the motor of human language.
Opposition of the textual/phonetic form of language to its semantic representation. The Western manual , while presenting three different well-known syntactic theories (including a new variant of the theory of N. Chomsky), notes: “Language ultimately expresses a relation between sound at one end of the linguistic spectrum and meaning at the other”. Once the vague notion of spectrum is somehow made precise, this is the same definition of language as in the MTM.
Generalizing character of language. Language is a theoretical generalization of the open, and hence infinite, set of utterances. This generalization operates with features, types, structures, levels, rules, etc., which are not directly observable. These theoretical constructs are fruits of the linguist’s intuition and must be repeatedly tested against new utterances.
Dynamic character of the model. A functional model not only proposes a set of notions, but also shows, by means of rules, how these notions are used in the processing of utterances. This establishes the correspondence between the model and the modeled language activity of humans.
Formal character of the model. The objective of a linguistic model is to construct a set of notions and rules so strict that they can be applied to any text or meaning entirely formally, without the participation of the model’s author or any other person. The application of the rules to a given piece of linguistic information should always yield the same result (or the same set of results). In principle, any part of a functional model can also be expressed in a strict mathematical form. If no ready mathematical tool is available at present, a new one should be elaborated.
Non-generating character of the model. No information arises during the functioning of a linguistic model; information merely takes another form in the transition from the textual representation to that of meaning and in the opposite direction. Though the form of an utterance changes within the model, applying the Chomskian term transformation to such changes is somewhat misleading. Mel’čuk has proposed for them the term equative transformations.
Independence of the model from the direction of transformation. The rules of linguistic transformations should be bi-directional, or at least admit reversal in principle.
Independence of algorithms from data. The description of a linguistic model is separated from the algorithms that use this description. Strict knowledge about a language does not determine any specific algorithm; on the contrary, in many situations an algorithm implementing a given set of rules can have numerous variants. In cases where the declarative linguistic knowledge has been obtained with the highest possible consistency, the implementing algorithms turn out to be rather universal, i.e., equally applicable to several languages. Such linguistic universality has something in common with the Universal Grammar that Chomsky is trying to create nowadays. The MTM does not claim full universality, but in fact it prepares all the necessary means for it.
Emphasis on detailed dictionaries. The main part of the description of any language concerns the description of its various words. Hence, dictionaries are considered the main part of a strict linguistic description. Only very general properties common to vast classes or subclasses of lexemes are extracted from the dictionaries to create what are called computer grammars.
Let us now pass to those peculiarities which the MTM retains in comparison with other linguistic models. We formulate these properties as they appear to us today, a full 25 years after the publication of the fundamental work . We will not classify them as advantages or disadvantages.
Orientation to synthesis. Though the directions of synthesis and analysis are declared equivalent, synthesis is considered primary and more important for the model. It is synthesis that draws on all the knowledge about a language, whereas analysis is possible on the basis of partial knowledge as well. The Western tradition, in contrast, puts the stress on the analysis of language, considering it more important for applications. Hence the unification algorithms, so important for text parsing, do not yet have counterparts for the opposite direction. Within any branch of the generative theory, it remains unclear how to transform a semantic network into a sequence of dependency or constituency trees corresponding to separate sentences.
When synthesis itself is the topic, the Western paradigm creates theories different from those of analysis. The cause lies in the fact that a specific generative grammar describes all the sentences of a language L, but only those semantic structures that correspond to this set L, whereas a theory of synthesis should describe all possible semantic networks and the sentences of the language that correspond to them. To our knowledge, the synthesis of text starting from an arbitrary semantic network has been thought through comprehensively only within the framework of the MTM.
Multilevel character of the model. According to the MTM, there exist several levels in language (textual, two morphological, two syntactic, and semantic), and the representation at one level is considered fully equivalent to that at any other level. The equative transformer Meaning ⇒ Text and the opposite transformer Text ⇒ Meaning are broken up into several partial transformers, each mapping one level to an adjacent one. This decomposition of the model into levels is intended to simplify the rules of inter-level transformations.
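The level architecture just described can be rendered as a pipeline of paired, reversible stages. The following is a minimal sketch under toy assumptions: the level names follow the MTM, but the placeholder transform functions, which merely push and pop level tags, are our own illustration, not linguistic rules.

```python
# A minimal sketch of the MTM level architecture: the full Meaning <=> Text
# transformer is a composition of partial transformers between adjacent
# levels. The transforms below are toy stand-ins for real linguistic rules.

LEVELS = ["semantic", "deep-syntactic", "surface-syntactic",
          "deep-morphological", "surface-morphological", "text"]

class PartialTransformer:
    """Bi-directional (equative) transformer between two adjacent levels."""
    def __init__(self, upper, lower, down, up):
        self.upper, self.lower = upper, lower
        self.down, self.up = down, up   # synthesis / analysis directions

def synthesize(meaning, transformers):
    """Meaning => Text: apply the partial transformers top-down."""
    rep = meaning
    for t in transformers:              # ordered from semantic to text
        rep = t.down(rep)
    return rep

def analyze(text, transformers):
    """Text => Meaning: apply the same transformers in reverse order."""
    rep = text
    for t in reversed(transformers):
        rep = t.up(rep)
    return rep

# Toy transformers that just record the level reached, to make this runnable:
ts = [PartialTransformer(LEVELS[i], LEVELS[i + 1],
                         down=lambda r, i=i: r + [LEVELS[i + 1]],
                         up=lambda r: r[:-1])
      for i in range(len(LEVELS) - 1)]

assert analyze(synthesize(["semantic"], ts), ts) == ["semantic"]
```

The point of the sketch is structural: synthesis and analysis traverse the same declarative chain of partial transformers in opposite directions, so reversibility is a property of the architecture rather than of any single algorithm.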
Variety of structures and formalisms. Each partial transformer has its own rules and formalisms, because of the significant variety of structures representing the data at different levels (strings, trees, networks, etc.). At each level, the MTM employs only a minimal set of descriptive means. (It is not considered obligatory, though, that in an application the partial transformers be applied algorithmically in exactly the order in which they stand in the model.) Modern Western scientific thought, in contrast, tries to find a common formalism for nearly all language levels.
Distinguishing deep and surface syntactic representations. The entities and syntactic features of these two levels are distinctly separated. Auxiliary and function words of the text disappear at the deep level. Analogously, some characteristics of wordforms, being purely grammatical, are present only at the surface (e.g., grammatical case in Russian, Finnish, etc., or agreement features of adjectives in Spanish, Hebrew, etc.), whereas others, determined by semantics, are retained at the deep levels as well. Such separation facilitates the minimization of descriptive means at each level. The notions of deep and surface syntactic levels can be found in the Chomskian theory too, but there they are defined in quite a different way.
Independence between the composition of words and their order in a sentence. Generally, this is a property of a whole family of syntactic theories, not only of the MTM. Complete independence between the two is not postulated; rather, they are governed by different factors. Formally, distinguishing them leads to the consistent use of dependency grammars at the syntactic level, rather than the constituency grammars of the majority of Western theories. As a result, the basic rules of inter-level transformations turn out to be quite different in the MTM from those of the Western paradigm. The principal advantage of dependency grammars is that precisely the links between (meaningful) words are retained at the semantic level, whereas for constituency grammars the semantic links must be revealed by a separate mechanism.
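The separation of word composition from word order can be made concrete by storing a sentence as an unordered set of labeled dependency links plus a separate linearization. This is only an illustrative sketch: the relation labels and the transliterated Russian example are our own assumptions, not the official MTM inventory of surface-syntactic relations.

```python
# Sketch: word composition (the labeled links) is stored separately from the
# linear word order, so free-word-order variants share one dependency structure.

from dataclasses import dataclass, field

@dataclass
class DepTree:
    words: dict                                 # node id -> lexeme
    links: set = field(default_factory=set)     # (head, relation, dependent)
    order: list = field(default_factory=list)   # node ids in linear order

# Russian "Ivan chitaet knigu" ('Ivan reads a book') admits several orders
# of the very same links (labels here are conventional illustrations):
tree = DepTree(
    words={1: "Ivan", 2: "chitaet", 3: "knigu"},
    links={(2, "subj", 1), (2, "obj", 3)},
    order=[1, 2, 3],                            # "Ivan chitaet knigu"
)
scrambled = DepTree(tree.words, tree.links, [3, 2, 1])  # "Knigu chitaet Ivan"

# Composition is identical; only the linearization differs:
assert tree.links == scrambled.links and tree.order != scrambled.order
```

The grammatically and communicatively determined ordering rules would then operate on the `order` attribute alone, leaving the links untouched.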
Accounting for the communicative structure of text. The composition of words depends on the content of an utterance, whereas the word order depends both on this composition (grammatically determined ordering) and on the communicative structure of the text (the impact of its division into theme vs. rheme and old vs. new information). In Western applied linguistics, the dependence of word order on the communicative structure of text went unnoticed for a long time.
Orientation to languages of a type different from English. To a certain extent, the opposition between dependency and constituency grammars is connected with different types of languages. Dependency grammars are especially appropriate for languages with free word order, like Russian or Latin, while constituency grammars suit languages with strict word order, like English. However, the MTM has shown in practice its ability to describe such languages as English, French, and German as well, and deep experience in operating with dependency trees has been accumulated for various languages. Meanwhile, the generative tradition (e.g., HPSG) is moving toward the same dependency trees, but in a step-by-step and implicit manner.
Means of synonymous variation and lexical functions. Only in the MTM have a calculus of lexical functions and rules of intra-level tree transformations using them been developed. This mechanism of synonymous variation is inalienable from, and very important for, the model. It is perhaps the most important peculiarity of the MTM, playing the key role in its means of synthesis (text generation), and it has no analog of comparable depth of elaboration in the generative tradition. It is with the aid of synonymous variation that realizable syntactic variants for a given semantic representation are sought, e.g., in translation from one language to another. Lexical functions also permit the semantic representation to be standardized, diminishing the basic variety of its nodes. From the applied viewpoint, synonymous variation is elaborated at the same level of strictness as Western formalisms and has been well tested in practice, i.e., in software implementations, for different languages. Within the Western paradigm, a similar task has apparently not even been formulated.
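As an illustration, a lexical function can be modeled as a mapping from a (function, keyword) pair to its lexically bound value. Magn (intensifier) and Oper1 (support verb taking the keyword as its object) are genuine lexical functions of the MTM; the tiny English value set below is our own assumption, not an excerpt from any real explanatory combinatorial dictionary.

```python
# A toy dictionary fragment for two standard lexical functions of the MTM:
# Magn (intensifier) and Oper1 (support verb). Real entries in an explanatory
# combinatorial dictionary are far richer; these values are illustrative only.

LF = {
    ("Magn", "rain"): "heavy",
    ("Magn", "smoker"): "heavy",
    ("Magn", "error"): "gross",
    ("Oper1", "attention"): "pay",
    ("Oper1", "support"): "lend",
}

def collocate(function, keyword):
    """Return the lexically bound value of a lexical function, if recorded."""
    return LF.get((function, keyword))

# Synthesis can thus choose the idiomatic collocation over a literal one:
assert collocate("Magn", "rain") == "heavy"      # "heavy rain", not "big rain"
assert collocate("Oper1", "attention") == "pay"  # "pay attention"
```

Because the value is bound to the keyword rather than computed from its meaning, the same abstract request (say, Magn applied to a given node of the semantic representation) is realized differently for different lexemes, which is exactly what makes the mechanism useful for synonymous variation in generation.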
Labeled syntactic relations between words. As is known, if the main constituent (head) is determined within each rule of a context-free grammar, just as in HPSG, any constituency tree is easily translated into a dependency tree equivalent with regard to the information about links between words and word groups. However, trees in the MTM have an additional property: all their arcs are labeled. It turns out that in specific languages there can exist isomorphic trees with different labels at the corresponding arcs, this difference being implied by a difference in meaning.
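The conversion just mentioned, from a head-marked constituency tree to the equivalent dependency tree, can be sketched as follows. The tree encoding (a label paired with either a word or a list of daughters, with the head daughter marked by an asterisk) is our own toy convention; note that the result is an unlabeled tree, so the MTM arc labels would still have to be supplied by a separate mechanism.

```python
# Sketch of head percolation: given a constituency tree whose every rule
# marks its head daughter, recover the equivalent set of dependency links.

def lexical_head(tree):
    """Return the lexical head word of a head-marked constituency (sub)tree."""
    label, children = tree
    if isinstance(children, str):        # leaf: (POS, word)
        return children
    head_child = next(c for c in children if c[0].endswith("*"))
    return lexical_head(head_child)

def to_dependencies(tree, deps=None):
    """Collect head -> dependent word links from a head-marked tree."""
    if deps is None:
        deps = set()
    label, children = tree
    if isinstance(children, str):
        return deps
    h = lexical_head(tree)
    for c in children:
        d = lexical_head(c)
        if d != h:                       # the head does not depend on itself
            deps.add((h, d))
        to_dependencies(c, deps)
    return deps

# "John reads books": S -> NP VP*, VP -> V* NP
s = ("S", [("NP", "John"),
           ("VP*", [("V*", "reads"), ("NP", "books")])])
assert to_dependencies(s) == {("reads", "John"), ("reads", "books")}
```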
Government patterns. In contradistinction to the subcategorization frames of generative linguistics, which outwardly contain the same information, government patterns (GPs) connect the semantic and syntactic valencies of lexemes, and not only of verbs, but of other parts of speech as well. Hence, a GP permits one to indicate explicitly through what variants each semantic valency is expressed at the more superficial level: with a bare noun, with a particular preposition and a noun, with any of several prepositions and a noun, with an infinitive, etc. Meanwhile, subcategorization is usually reduced to a list of all possible combinations of syntactic valencies with their fixed ordering in a phrase. In languages with rather free word order, the number of such frames for a specific verb can reach several tens, and this obscures the whole picture of its semantic valencies. Additionally, the variety of verb groups with the same combination of frames can be quite comparable with the total number of verbs in the language.
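A government pattern can be sketched as a table mapping each semantic valency of a lexeme to its admissible surface realizations. The entry below for the English verb insist is our own simplified illustration (though the collocations insist on X and insist that... are real English), not a quotation from any existing dictionary.

```python
# Sketch: a government pattern records, per semantic valency, every surface
# means that can express it. The format and the entry are illustrative only.

GP = {
    "insist": {
        1: [{"role": "agent", "form": "noun"}],          # who insists
        2: [{"role": "theme", "form": "on + noun"},      # insist on X
            {"role": "theme", "form": "that-clause"}],   # insist that ...
    }
}

def surface_options(lexeme, valency):
    """All surface realizations recorded for one semantic valency."""
    return [v["form"] for v in GP[lexeme][valency]]

assert surface_options("insist", 2) == ["on + noun", "that-clause"]
```

Contrast this with listing whole subcategorization frames: the two realizations of valency 2 would there multiply into separate frames, combined with every realization of every other valency and ordering.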
Keeping the traditions and terminology of classical linguistics. The MTM treats the heritage of classical linguistics much more carefully than Western computational linguistics does. In its long development, the MTM has shown that in the majority of cases, even the increased accuracy of description and the need for a strict formalism permit the already known terminology to be conserved, perhaps after the terms are given stricter definitions. The notions of lexeme, morpheme, morph, grammatical subject, object, predicate, circumstance, etc., have been retained.
In the framework of generative linguistics, by contrast, the theory is constructed each time nearly from zero, without attempts to explicate the relevant phenomena in terms already known in general linguistics. This does not always bring additional strictness; more often it leads to terminological confusion, since specialists in adjacent fields simply do not understand each other.
Generating all synonymous variants at once. In his early works, I. Mel’čuk announced that his model was destined to generate in parallel all synonymous variants of the same meaning at once (and to reveal all possible parses of a sentence, if it is ambiguous). As a theoretical formula of the meaning-to-text correspondence this was quite appropriate, but some scientists may have understood it as the necessity of such programs and a promise to create just them. Even if the MTM gives such a possibility in principle, in practical text generation only one good variant is really needed.
The slogan of atomization of semantics. It was careless to take on, nearly as an obligation, the realization within this model of a system of semantic atoms (semes) through which any meaning could be represented. This goal has been reached neither in this nor in any other theory, and it is still debatable how many such atoms would suffice. Everybody has happily agreed with the idea that it is sufficient to decompose the meanings of words to a reasonable limit implied by the application. Thus, translation between closely related languages may need no such decomposition at all.
As to a decomposition of meanings within, say, the Indo-European languages that is limited and equally acceptable to different researchers, it is potentially possible, but for big dictionaries it requires decades of sustained lexicographic labor. One direction of such labor has been initiated by Yu. D. Apresian; it includes the development of synonymy dictionaries of a special type, with the extraction of all the semantic features that distinguish non-absolute synonyms . Without such development, not yet accompanied by a full formalization, it is impossible to imagine future systems of text “understanding”.
Absence of rules of word ordering implied by the communicative organization of phrases. The matter is that, till now, there is no adequate theory of the communicative organization of sentences and of the text as a whole.
In languages like English, where communicative organization is usually not expressed outwardly (the word order being strictly fixed), many researchers simply ignore this topic. In the MTM, however, where word composition and word order are considered separately, such ignoring would amount to the introduction of a superfluous entity without any formal means to operate on it.
Technology for the compilation of explanatory combinatorial dictionaries not elaborated. Till now, their compilation is feasible only for those who have been mastering the model for many years and, in essence, have been creating and perfecting it for a long time. Perhaps precisely the absence of a clear and popular technology for the development of these dictionaries has become one of the main historical reasons for the model’s practical lag behind its Western “competitors”; we intend to discuss this topic later in more detail.
Let us now try to understand why the Meaning–Text model has not found noticeable support in the West, despite the fact that its author has lived in Canada for more than 20 years and has spent much time popularizing his ideas. In our opinion, the reasons are as follows.
Prematurity. Historically, the MTM was conceived and elaborated by scientists of huge erudition, in a country with profound traditions in the humanities, but long before any real possibility of its software realization.
From the very beginning, the creators of the MTM clearly realized the tremendous objective complexity of the problem. They elaborated theoretically, and in detail, a multiplicity of variants, exceptions, and rare and complicated cases. They proposed an elaborate structure of dictionaries to contain the gigantic volume of lexical information. As a result, the first attempts at practical realization of the model simply drowned in the complexity and size of the work, especially given the unavailability of powerful computers and computational resources in Russia at the time the model was being developed.
On the contrary, the first toy Chomskian-style grammars were enthusiastically met in the West by a whole army of specialists (and non-specialists) with access to real computers, text corpora, and, through them, real customers and commercial projects. For the realization of such projects, even these simple grammars were sufficient. The opportunity of quick success, with growing interest and material support, provoked rapid and sustained development of the theory and then the creation of a numerous scientific school.
Only some 30 years later has the Western tradition approached the realization that real scientific success is neither quick nor easy, but it has arrived there with a baggage of formalisms and their program implementations accumulated in tiny steps, with experience both in the elaboration of theories and in their teaching, as well as with an army of educated professionals and with the credit of confidence of scientific and funding organizations.
Anglophone isolationism. As early as the beginning of the 20th century, it was proposed to describe English by means of immediate constituents. The division of sentences into constituents seemed to be directly observable in a text, i.e., uniquely “objective”, and each new observation based on English only confirmed the applicability of such a division. As a result, the method of constituents became absolutely universal in the eyes of anglophone researchers, even the only possible one.
Scientific Europe did not hurry to join this method. Only in the 1930s did L. Tesnière introduce dependency trees, which had long been known informally in the descriptive practice of many languages, particularly Russian.
When N. Chomsky’s generative grammars appeared in the United States, with context-free grammars outwardly so well suited to the constituency method, this choice seemed the only possible one from the theoretical viewpoint as well. In other words, the description of natural languages by means of generative grammars seemed quite adequate at that stage.
The further development of the generative tradition included the propagation of methods rooted in English syntax to other languages and to complicated cases of English proper. Certain successes along this path generated the illusion that the method is quite universal. Well, it does describe cases in Icelandic and clitics in French; is that not enough for anybody? As a matter of fact, until very recently this path consisted rather in the adaptation of one structure to the description of another . However, to do justice to the MTM, it has its roots in Russian, and the morphological coding of syntactic relations so characteristic of that language is not so good a fit for Western languages and is thus not too interesting to them.
Financial power of the United States. Chomskian grammars appeared in the 1950s, and the late 1950s and early 1960s were characterized in the USA by an unprecedented state support of the fundamental sciences. Having lost the first round of the game for the sputnik, America considered it right to support the exact sciences, all of them at once, especially since the money permitted it. Among others, formal grammars turned out to benefit. The number of scientific works (purely mathematical, as a rule) grew rapidly; mathematical linguistics formed as a separate science and began to be taught in the big universities of the country.
Later a sobering stage came with regard to the universal applicability of context-free grammars to natural languages, simultaneously with their triumph in application to programming languages. New linguistic models appeared, but on the basis of those already studied. Young mathematicians and programmers were drawn into the cognitive process. Groups formed in the scientific area named computational linguistics, and the financial power of the USA government agencies became the basis of the rapid evolution of such groups.
When Europe became rich enough to support the movement more strongly, the methods were already settled. In effect, the methods that became more fashionable were those that had been better supported financially.
Inertia of the higher school. Let us imagine the cyclic process of the reproduction of PhDs in Western higher education. A doctor in the field of computational linguistics prepares his own successors, year by year, generating new PhDs in the same specialty. Those newly fledged PhDs who then come to corporations developing lingware know only the theory they were taught. Even if it is not good enough to bring their programs to perfection, they have no time to search for any other theory: they have to program. Those who remain in the higher school pass their knowledge on to the next generation of PhDs, within the fully formed tradition. One must be rather bold and well prepared to bring a significantly new element into an already formed tradition.
The inertia of the higher school opposes any innovation, especially one that promises something better not for the native and thus important language, but for an exotic one, such as Icelandic or Russian, and only in an indefinite future at that.
Unshakable belief in the abilities of computers rather than of humans. Before our eyes, computers have increased their performance and memory size many times over. To those well acquainted with programming but rather poorly with linguistics, this seems a direct proof that, at some level of development, computers will solve any problem “by themselves”, by brute electronic force, i.e., by immense sorting among the various possible solutions. Perhaps the newest IBM chess program confirmed this tendency when it won a game against Kasparov?
If this is true, then why should we ourselves try to comprehend the subtle mechanisms of language, or manually translate into the “human” (i.e., programming) language the knowledge proposed by linguistic theoreticians in general, and by outsiders among them in particular? Thus we arrive within easy distance of the verdict: “Since this approach has not become popular till now, it has not reached the necessary level of maturity, so it should be neither known nor even taught.” (The authors received such a formula in one of the reviews.) Well, let the cast-down prove their necessity to the world by themselves.
Requirement of immediate success. Any new theory in computational linguistics now begins with formalisms; it then advances with new algorithms or with methods of embedding the new formalisms into old algorithms (say, unification ones). At this step a doctoral thesis usually stops. Then a sobering stage is possible for those trying to apply the new theory: within its framework it is necessary to describe several thousand (or better, several tens of thousands of) lexemes, i.e., to create, manually or semi-automatically (full automation has not yet been reached by anybody), some type of new dictionary. Such an enterprise needs a technology, so that those to whom no doctoral degree is promised can use it properly.
If the theory is very good at explaining something new, or promises significant advantages in practice, and its author and/or project head is energetic enough to get money for the creation of a great “lexical resource”, a new dictionary could appear. But Western government officials follow a directive against great projects: to support only short-term ones, for two or three years at most. Within this time, however, it is impossible to create a big dictionary, be it in printed or in computer form. (We can understand these officials, since the cost of a scientist is very high in the West, as is their responsibility for the proper expenditure of state means.) Thus, in the best case, the dictionary will grow part by part. Naturally, it will be a dictionary original in its ideology and technology of compilation, rather than one created according to someone else’s theory hardly suited for obtaining a new academic degree, or according to any other theory lacking, in essence, any technological support.
Prestige and ambitions. This is another purely human factor. It is pleasant and profitable (and sometimes quite necessary for getting a university position or project financing) to be the author of a new fashionable theory. At the same time, in our mercenary world one must be somewhat of an altruist to consciously support someone else’s theory and to assist, by everyday work, its promotion into scientific circulation. It is easier to invent a new title, terminology, and formalism without significant deviation from the “mainstream”, so as to remain comprehensible to those who invented similar theoretical means in the past.
It seems that any one of the factors depicted above might have sufficed for the rejection and ignoring of the MTM. In the history of modern science it sometimes happens that it is not the best scientific paradigm that wins, but the one that, more or less arbitrarily, enjoys greater financial support or the greater energy and opportunities of its creators, i.e., circumstances not directly concerning science.
Let us now briefly summarize the way in which Western applied linguistics has evolved without even taking notice of the MTM, and then repeat our main conclusion.
The MTM appeared as the first “cybernetic” theory in the field of linguistics in the former USSR that additionally had an applied objective, namely, to create a bi-directional linguistic processor. This theory brought an explication of many scientific facts and proposed methods for their formal or quasi-formal description. It was one of the most comprehensive, complete, and clearly worded theories of its time, and hence it attracted the attention of many interested observers and a considerable number of followers in the former USSR.
Nevertheless, this theory was not the first for the West. The West already had its own apostle, N. Chomsky. The fundamental book of I. Mel’čuk remained even untranslated into foreign languages and, though well known in the USSR, did not acquire readers in the West.
The West went its own way in a similar direction. Much intellectual effort was spent (by N. Chomsky as well) on overcoming the deficiencies of the initial premises of the Chomskian theories. Eventually the transformations were abandoned, tools similar to government patterns were introduced, and the necessity of directly accounting for dependencies between words, and of considering their order separately, was gradually admitted. The invention of simple and powerful unification algorithms simplified the solution of parsing tasks.
The recent decade has been characterized by projects for the development of big computer dictionaries, since even the most general grammar formalisms do not by themselves solve the practical problems of natural language processing. Theories of text synthesis have appeared that are independent of the theories of analysis.
The evident practical successes of the Western tradition have led many scientists to the strong conviction that no revolutionary revisions of the main line of evolution are needed, and that if there is something in outsiders’ theories really deserving their attention, in due time it will inevitably converge to the mainstream in some unassisted way.
Since over the recent three decades the total amount of effort spent in the world on the development of the MTM has been incomparably smaller, many of its aspects have been left underdeveloped and sometimes look like mere good intentions. As to applications, it has not produced convenient formalisms, e.g., for syntactic parsing. A technology for the creation of big explanatory combinatorial dictionaries conceivable to novices has not been achieved either.
Under these conditions, we may state that the MTM has become commonplace in some of its formulations, or in them even somewhat obsolete. But it is not outdated in its general idea. It remains true and fruitful in the majority of its premises, and the development of the Western paradigm only confirms this. Even if the model were modernized only minimally now, it could still be a source of ideas and approaches novel to Western applied linguistics.
Now let us reflect a little on whether computer scientists really need a generalizing (‘through’) model of language.
If we look at modern theoretical linguistics, we can see that some researchers study phonetics, others morphology, still others syntax, and yet others semantics and pragmatics. Within phonetics, some are absorbed in accentuation; within semantics, in speech acts; etc. There is no limit to the subdivision of the great science of linguistics, and there is seemingly no necessity to occupy oneself once more, after the ancient Greeks, F. de Saussure, and N. Chomsky, with the general philosophical question “What is natural language, and what should its generalizing model be?”
In theoretical linguistic research, the criterion of truth remains logical coherence, consistency, and correspondence between the theory author’s intuitive conceptions of the given linguistic phenomenon and those of the other members of the linguistic community. Linguistic examples are often to be taken as evidence for a theory only through the prism of this logicality, consistency, and correspondence to the common intuition.
In this sense, many works by I. Mel’čuk (e.g., in morphology), as well as those by the majority of other theoretical linguists, appear to be important stages in the inner development of the science, and it is not even appropriate to classify them into the scientific paradigms within whose frameworks they first appeared.
The situation in applied linguistics is somewhat different. Here the criterion of truth is the closeness between the output of a program processing language utterances and the ideal performance determined by the mental abilities of an average native speaker of the language. Since the processing procedure, because of its complexity, has to be split into several stages, a ‘through’ model is really necessary to determine which formal features and structures are to be assigned to the utterances, and to the language as a whole, at each stage, and how these features should participate in each stage of linguistic transformations within a computer.
Yu. D. Apresian speaks on this basis about the rise of experimental linguistics. It seems that in the future, experimental testing of the deepest results of even the most “branch-oriented” linguistic theories will be an inevitable and important element of the evolution of this science as a whole. As to applied linguistics, experimentation has already become essential, and it is directly influenced by the structures selected for describing the language and by the sequence of transformations recommended by the theory.
Therefore, the quite philosophical problem of the linguistic model has turned out to be of primary importance for applied linguistics. In the foreseeable future, applied specialists will have to select such models and to distinguish them from purely mathematical and algorithmic formalisms. Models accumulate the facts of a language, while formalisms help to apply these facts correctly in practice. Formalisms change more rapidly than models, and this tendency can be clearly seen in the picture just depicted.
 Mel’čuk, Igor A. Dependency Syntax: Theory and Practice. SUNY Press, Albany, NY, 1988.
 Apresian, Yu. D., et al. Lingware of the System ETAP-2 (in Russian). Moscow, Nauka Publ., 1989.
 Apresian, Yu. D., et al. Linguistic Processor for Complicated Informational Systems (in Russian). Moscow, Nauka Publ., 1992.
 Steele, James (ed.). Meaning–Text Theory: Linguistics, Lexicography, and Implications. University of Ottawa Press, 1990.
 Mel’čuk, Igor A. Course of General Morphology, vols. 1–4. Wien, 1997.
 Sells, Peter. Lectures on Contemporary Syntactic Theories. CSLI Publ., Stanford, 1985.
 Sag, Ivan A., and Thomas Wasow. Syntactic Theory: A Formal Introduction. CSLI Publ., Stanford, 1999.
 Mel’čuk, I. A. An Experiment in the Theory of Linguistic Meaning–Text Models (in Russian). Moscow, Nauka Publ., 1974.
 Apresian, Yu. D. Lexical Semantics: Synonymous Means of Language (in Russian). Moscow, Vostochnaya Literatura Publ., 1995.
 Gelbukh, A. F. Between Text and Meaning (in Russian, abstract in English). Proceedings of the International Workshop Dialog’99, Tarusa, Russia, 1999.
1 After these lines had been written, Prof. I. Mel’čuk kindly offered us, for preliminary acquaintance, his unpublished book on communicative structures. It basically fills this gap in the theory. Nevertheless, since that book has not yet entered scientific circulation, we have decided not to change our formulations here.