Research Interests: Artificial Intelligence (Formal Ontology, Semantics, NLU, Commonsense Reasoning); Object-oriented software engineering ...
We are in the process of building DALIA, an environment for distributed, artificial, and linguistically competent intelligent agents that communicate in natural language and perform commonsense reasoning in a highly dynamic and uncertain environment. There are several challenges in this effort that we do not touch on in this paper. Instead, we focus here on the design of a virtual marketplace where buying and selling agents that learn from experience negotiate autonomously on behalf of their clients. Buying and selling agents enter the ...
It is by now widely accepted that a number of tasks in natural language understanding (NLU) require the storage of and reasoning with a vast amount of background (commonsense) knowledge. While several efforts have been made to build such ontologies, a consensus on a scientific methodology for ontological design is yet to emerge. In this paper we suggest an approach to building a commonsense ontology for language understanding using language itself as a design guide. The idea is rooted in Frege's conception of compositional ...
The purpose of this paper is twofold: (i) we argue that formal semantics might have faltered due to its failure to distinguish between two fundamentally different types of concepts, namely ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types; and (ii) we show that accounting for these differences amounts to a new formal semantics, one that integrates lexical and compositional semantics in a single coherent framework and embeds formal semantics in a strongly-typed ontology, an ontology that reflects our commonsense knowledge of the world and the way we talk about it in ordinary language. We show how, in such a framework, a number of challenges in the semantics of natural language are adequately and systematically treated.
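To make the proposed type/predicate division concrete, here is a minimal sketch in Python, with illustrative names of our own (this is not the paper's formal system): ontological concepts become types in a subtype hierarchy, while logical concepts become predicates whose signatures range over those types, so that category mistakes surface as type errors rather than contingent falsehoods.

```python
# Ontological concepts: types in a (toy) strongly-typed ontology.
class Entity: pass            # top of the hierarchy
class Physical(Entity): pass  # objects with spatial extension
class Human(Physical): pass
class Event(Entity): pass     # Davidsonian events

# Logical concepts: predicates typed over ontological categories.
def tall(x: Physical) -> bool:
    """'tall' applies to physical objects."""
    return True  # stub; real content would consult knowledge about x

def imminent(e: Event) -> bool:
    """'imminent' applies to events, not to physical objects."""
    return True  # stub

john = Human()
print(tall(john))   # well-typed: Human is a subtype of Physical
# tall(Event())     # rejected by a static checker (e.g., mypy):
#                   # 'the meeting is tall' is a type error, not a falsehood
```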
A mental state model for autonomous agent negotiation is described. In this model, agent negotiation is assumed to be a function of the agents' mental state (attitude) and their prior experiences. The mental state model we describe here subsumes both competitive and cooperative agent negotiations. The model is first instantiated by buying and selling agents (competitively) negotiating in a virtual marketplace. Subsequently, it is shown that agent negotiations tend to be more cooperative than competitive as agents tend to agree (more ...
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering that structure. Besides suggesting a systematic method for discovering the structure of commonsense knowledge, the method we propose also seems to explain a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: nothing less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge and, subsequently, the formulation of the long-awaited meaning algebra.
The Winograd Schema (WS) challenge has been proposed as an alternative to the Turing Test as a test for machine intelligence. In this short paper we "situate" the WS challenge in the data-information-knowledge continuum, suggesting in the process what makes a good WS. Furthermore, we suggest that the WS is a special case of a more general phenomenon in language understanding, namely the phenomenon of the "missing text". In particular, we argue that what we usually call thinking in the process of language understanding almost always involves discovering the missing text: text that is rarely explicitly stated but is implicitly assumed as shared background knowledge. We therefore suggest extending the WS challenge to include tests beyond those involving reference resolution, including examples that require discovering the missing text in situations that are usually treated in computational linguistics under different labels, such as metonymy, quantifier scope ambiguity, lexical disambiguation, and co-predication, to name a few.
In our opinion, the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts). Since we believe the relative success of data-driven LLMs is not a reflection on the symbolic vs. subsymbolic debate but on the successful strategy of bottom-up reverse engineering of language at scale, we suggest in this paper applying this effective bottom-up strategy in a symbolic setting, resulting in symbolic, explainable, and ontologically grounded language models.
We describe a mental state model for agents negotiating in a virtual marketplace. Buying and selling agents enter the marketplace with an attitude formulated as a complex function of prior experiences, market conditions, and product information, as well as personal characteristics such as the relative importance of time versus price and the level of commitment to the purchase or sale of the product. While buying and selling agents are generally assumed to have opposing interests, the mental state model can be extended to other ...
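As a rough sketch of the kind of attitude function described here, the following Python fragment folds prior experience, market conditions, and the time/price trade-off into a single concession rate; the field names and the weighting scheme are our own illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    prior_success_rate: float  # fraction of past negotiations that succeeded
    market_pressure: float     # 0 = buyer's market, 1 = seller's market
    time_weight: float         # importance of closing quickly (0..1)
    price_weight: float        # importance of getting a good price (0..1)
    commitment: float          # commitment to the purchase/sale (0..1)

def concession_rate(s: MentalState) -> float:
    """How quickly a buying agent moves toward the asking price.

    Impatient, committed agents in a tight market concede faster;
    price-sensitive agents with good past outcomes concede more slowly.
    (The weights below are arbitrary illustrative choices.)
    """
    eagerness = s.time_weight * s.commitment * (0.5 + 0.5 * s.market_pressure)
    resistance = s.price_weight * (0.5 + 0.5 * s.prior_success_rate)
    return max(0.0, min(1.0, 0.25 + eagerness - 0.5 * resistance))

buyer = MentalState(prior_success_rate=0.8, market_pressure=0.3,
                    time_weight=0.7, price_weight=0.6, commitment=0.9)
print(f"concession rate: {concession_rate(buyer):.2f}")  # e.g. 0.39
```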
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. Assuming the existence of such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.
In this note we suggest that the difficulties encountered in natural language semantics are, for the most part, due to the use of mere symbol manipulation systems that are devoid of any content. In such systems there is hardly any link with our common-sense view of the world, and it is quite difficult to envision how one can formally account for the considerable amount of content that is often implicit, but almost never explicitly stated, in our everyday discourse. The solution, in our opinion, is a compositional semantics grounded in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. The compositional logic we envision has ontological (or first-intension) concepts and logical (or second-intension) concepts, where the ontological concepts include not only Davidsonian events but other abstract objects as well (e.g., states, processes, properties, activities, attributes, etc.). It will be demonstrated here that ...
The Winograd Schema (WS) challenge, proposed as an alternative to the Turing Test, has become the new standard for evaluating progress in natural language understanding (NLU). In this paper we are not, however, concerned with how this challenge might be addressed. Instead, our aim is threefold: (i) we formally 'situate' the WS challenge in the data-information-knowledge continuum, suggesting where in that continuum a good WS resides; (ii) we show that a WS is just a special case of a more general phenomenon in language understanding, namely the missing text phenomenon (henceforth, MTP); in particular, we argue that what we usually call thinking in the process of language understanding involves discovering a significant amount of 'missing text', text that is not explicitly stated but is often implicitly assumed as shared background knowledge; and (iii) we conclude with a brief discussion of why MTP is inconsistent with the data-driven and m...
We suggest modeling concepts as types in a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In such a framework, certain types of ambiguities in natural language are explained by the notion of polymorphism. In this paper we suggest such a typed compositional semantics for nominal compounds of the form (Adj Noun) where adjectives are modeled as higher-order polymorphic functions. In addition to (Adj Noun) compounds our proposal seems also to ...
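The following Python sketch illustrates, under our own simplifications (the names and thresholds are ours, not the paper's formalism), what "adjectives as higher-order polymorphic functions" can look like: an adjective maps a noun denotation, a property of objects of some type T, to another property of the same type, with intersective adjectives like 'red' composing uniformly and subsective ones like 'big' depending on the noun's comparison class.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, TypeVar

T = TypeVar("T")
Property = Callable[[T], bool]  # a noun denotes a property of T-objects

def red(noun: Property[T]) -> Property[T]:
    """Intersective: 'red N' = things that are N and red, for any type T."""
    return lambda x: noun(x) and getattr(x, "color", None) == "red"

def big(noun: Property[T], threshold: float) -> Property[T]:
    """Subsective: 'big N' is judged relative to the noun's comparison
    class (crudely modeled here as a per-noun size threshold)."""
    return lambda x: noun(x) and getattr(x, "size", 0.0) > threshold

@dataclass
class Thing:
    kind: str
    color: str
    size: float

car: Property[Thing] = lambda x: x.kind == "car"
print(red(car)(Thing("car", "red", 4.2)))                 # True: a red car
print(big(car, threshold=5.0)(Thing("car", "red", 4.2)))  # False: small, for a car
```

The same 'red' composes with nouns of any type, while 'big ant' and 'big car' get different thresholds, which is one way polymorphism can explain apparent ambiguity in (Adj Noun) compounds.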
We argue that logical semantics might have faltered due to its failure to distinguish between two fundamentally different types of concepts: ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types. We then show that accounting for these differences amounts to the integration of lexical and compositional semantics in one coherent framework, and to an embedding in our logical semantics of a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. We show that in such a framework a number of challenges in natural language semantics can be adequately and systematically treated.
Quantification in natural language is an important phenomenon that seems to touch on pragmatic and inferential aspects of language understanding. In this paper we focus on quantifier scope ambiguity and suggest a cognitively plausible model that resolves a number of problems that have traditionally been addressed in isolation. Our claim is that quantifier scope ambiguity cannot be adequately addressed at the syntactic and semantic levels, but is an inferencing problem that must be addressed at the pragmatic and discourse levels. To illustrate the pragmatic aspect of the problem, consider the following examples: (1) John advertised a restaurant on every street; (2) John visited a restaurant on every street. In compositiona...
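As a worked illustration of the two scope readings at stake in (1) and (2), here is a standard first-order rendering (our notation, not necessarily the paper's). Commonsense knowledge is what selects the reading: a single restaurant cannot be located on every street, so the wide-scope existential is rejected for 'visited' in (2), whereas advertisements for one restaurant can appear on every street, making the analogous single-restaurant reading natural for 'advertised' in (1).

```latex
% Two candidate readings of ``John visited a restaurant on every street''.
\begin{align*}
\exists\text{-wide:}\quad
  & \exists r\,[\mathit{restaurant}(r) \wedge
      \forall s\,[\mathit{street}(s) \rightarrow \mathit{on}(r,s)] \wedge
      \mathit{visited}(j,r)] \\
\forall\text{-wide:}\quad
  & \forall s\,[\mathit{street}(s) \rightarrow
      \exists r\,[\mathit{restaurant}(r) \wedge \mathit{on}(r,s) \wedge
      \mathit{visited}(j,r)]]
\end{align*}
```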
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure we show that the semantics of various natural language phenomena may become nearly trivial.
Over two decades ago a "quiet revolution" overwhelmingly replaced knowledge-based approaches in natural language processing (NLP) with quantitative (e.g., statistical, corpus-based, machine learning) methods. Although it is our firm belief that purely quantitative approaches cannot be the only paradigm for NLP, dissatisfaction with purely engineering approaches to the construction of large knowledge bases for NLP is somewhat justified. In this paper we hope to demonstrate that both trends are partly misguided and that the time has come to enrich logical semantics with an ontological structure that reflects our commonsense view of the world and the way we talk about it in ordinary language. Assuming such an ontological structure, we demonstrate that a number of challenges in the semantics of natural language (e.g., metonymy, intensionality, copredication, nominal compounds, etc.) can be properly and uniformly addressed.
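As one hedged illustration of how such an ontological structure bears on copredication (e.g., 'the book was heavy but informative'), the Python fragment below types 'book' under both a physical and an informational facet so that both predications type-check; the class names are ours, for illustration only.

```python
class Physical: pass       # objects with weight, location, ...
class Informational: pass  # contents that can be read, summarized, ...

class Book(Physical, Informational):
    """'book' lives under both facets, licensing copredication."""

def heavy(x: Physical) -> bool:
    return True  # stub

def informative(x: Informational) -> bool:
    return True  # stub

b = Book()
print(heavy(b) and informative(b))  # both predications are well-typed
```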
A system, method, and related techniques are disclosed for scoring user responses to constructed response test items. The system includes a scoring engine for receiving a user response to a test question and evaluating the response against a scoring rubric. The scoring rubric may include a binding stage, an assertion stage, and a scoring stage. Furthermore, the system includes a database for referencing elements used by the scoring engine which may comprise objects, object sets, attributes of objects, and transformations ...
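A minimal sketch of the three-stage rubric evaluation outlined above (binding, assertion, scoring); the class and function names here are our own illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rubric:
    # Binding stage: map phrases in the response to known objects/attributes.
    bind: Callable[[str], dict]
    # Assertion stage: conditions checked against the bound elements.
    assertions: list = field(default_factory=list)
    # Scoring stage: turn assertion outcomes into a numeric score.
    score: Callable[[list], float] = lambda oks: sum(oks) / max(len(oks), 1)

def evaluate(rubric: Rubric, response: str) -> float:
    bindings = rubric.bind(response)                             # binding stage
    outcomes = [check(bindings) for check in rubric.assertions]  # assertion stage
    return rubric.score(outcomes)                                # scoring stage

# Toy usage: full credit if the response mentions both required concepts.
rubric = Rubric(
    bind=lambda text: {"mentions": set(text.lower().split())},
    assertions=[lambda b: "photosynthesis" in b["mentions"],
                lambda b: "sunlight" in b["mentions"]],
)
print(evaluate(rubric, "Photosynthesis converts sunlight into energy"))  # 1.0
```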
Large language models (LLMs) have achieved a milestone that undeniably changed many long-held beliefs in artificial intelligence (AI). However, these LLMs still have many limitations when it comes to true language understanding, limitations that are a byproduct of the underlying architecture of deep neural networks. Moreover, due to their subsymbolic nature, whatever knowledge these models acquire about how language works will always be buried in billions of microfeatures (weights), none of which is meaningful on its own, making such models hopelessly unexplainable. To address these limitations, we suggest combining the strength of symbolic representations with what we believe to be the key to the success of LLMs, namely a successful bottom-up reverse engineering of language at scale. As such, we argue for a bottom-up reverse engineering of language in a symbolic setting. Hints at what this project amounts to have been offered by several authors, and we discuss in some detail here how it could be accomplished.
"We know any object only through predicates that we can say or think of it."