Impasse to the Measure of Being. Thought of the Day 47.0


The power set p(x) of x – the state of the situation x, or its metastructure (Alain Badiou – Being and Event) – is defined as the set of all subsets of x. Basic relations between sets can then be expressed as relations between sets and their power sets. If every element of x is also a subset of x, then x is a subset of p(x), and x can be reduced to its power set. Conversely, if every subset of x is an element of x, then p(x) is a subset of x, and the power set p(x) can be reduced to x. Sets satisfying the first condition are called transitive; for obvious reasons the empty set is transitive. The second relation, however, never holds. The mathematician Georg Cantor proved not only that p(x) can never be a subset of x, but that in a fundamental sense it is strictly larger than x. On the other hand, the axioms of set theory do not determine the extent of this difference. Badiou says that it is an “excess of being”, an excess that at the same time is its impasse.
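The two relations above can be made concrete in a small sketch (my own illustration, not from the text), using Python frozensets so that sets can contain sets. The von Neumann naturals 0 = {}, 1 = {0}, 2 = {0, 1} are the standard examples of transitive sets:

```python
def powerset(x):
    """Return p(x), the set of all subsets of x."""
    out = {frozenset()}
    for e in x:
        out |= {s | {e} for s in out}
    return out

def is_transitive(x):
    """Every element of x is also a subset of x, i.e. x is a subset of p(x)."""
    return all(e <= x for e in x)

# von Neumann naturals: 0 = {}, 1 = {0}, 2 = {0, 1} -- all transitive.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

assert is_transitive(two)           # 2 is transitive
assert two <= powerset(two)         # hence 2 is a subset of p(2)
assert not powerset(two) <= two     # but p(x) is never a subset of x
```

The last assertion only samples Cantor's theorem at one finite set, of course; the theorem itself is a general diagonal argument.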

In order to explain the mathematical sense of this statement, recall the notion of cardinality, which clarifies and generalizes the common understanding of quantity. We say that two sets x and y have the same cardinality if there exists a function defining a one-to-one correspondence between the elements of x and the elements of y. For finite sets, this definition agrees with common intuition: if a finite set y has more elements than a finite set x, then no matter how the elements of x are assigned to elements of y, something in y will be left over, precisely because y is larger. In particular, if y contains x together with some other elements, then y does not have the same cardinality as x. This seemingly trivial fact fails outside the domain of finite sets. To give a simple example, the set of all natural numbers contains the square numbers, that is, numbers of the form n², as well as other numbers; yet the set of all natural numbers and the set of square numbers have the same cardinality. The correspondence witnessing this fact assigns to every number n a unique square number, namely n².
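A minimal numerical illustration of this correspondence (my own, not from the text): the map n ↦ n² pairs the natural numbers one-to-one with the square numbers, even though the squares form a proper part of the naturals.

```python
def square(n):
    return n * n

window = range(10)                     # any finite window of the sequence
squares = [square(n) for n in window]

# The map is injective: distinct naturals yield distinct squares.
assert len(set(squares)) == len(squares)
# And it is onto the squares by construction: every square number n**2 is hit by n.
assert squares == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Only a finite window can be checked by machine, but the same pairing works uniformly for the whole infinite sequence.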

Counting finite sets has always been done via the natural numbers 0, 1, 2, . . . In set theory, this canonical measure can be extended to infinite sets using the notion of cardinal numbers. Without going into the details of their definition, let us say that the series of cardinal numbers begins with the natural numbers, which are directly followed by ω0, the size of the set of all natural numbers, then by ω1, the first uncountable cardinal, and so on. The hierarchy of cardinal numbers has the property that every set x, finite or infinite, has cardinality (i.e. size) equal to exactly one cardinal number κ. We then say that κ is the cardinality of x.

For every finite set x of cardinality n, the cardinality of the power set p(x) is 2^n. However, something quite paradoxical happens when infinite sets are considered. Even though Cantor’s theorem states that the cardinality of p(x) is always larger than that of x – just as in the finite case – the axioms of set theory do not determine the exact cardinality of p(x). Moreover, one can formally prove that no proof can determine the cardinality of the power set of any given infinite set. There is a general method of building models of set theory, discovered by the mathematician Paul Cohen and called forcing, that yields models in which – depending on the details of the construction – the cardinalities of infinite power sets take different values. Consequently, quantity – “a fetish of objectivity”, as Badiou calls it – does not define a measure of being but leads instead to its impasse. It reveals an undetermined gap in which an event can occur – “that-which-is-not being-qua-being”.
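The finite half of this contrast is mechanically checkable; a quick sketch (my own, not from the text) verifies that a set of cardinality n has a power set of cardinality 2^n, the very regularity that breaks down for infinite sets:

```python
from itertools import combinations

def powerset_size(x):
    """Count the subsets of x by enumerating all k-element combinations."""
    return sum(1 for k in range(len(x) + 1)
                 for _ in combinations(x, k))

for n in range(8):
    x = set(range(n))
    assert powerset_size(x) == 2 ** n   # |p(x)| = 2^n for finite x
```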

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, one fact is undeniable: the far origin of mathematics certainly lies in empirical, practical knowledge, but nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that certain very important logical systems – the classical and the intuitionistic (with all its axiomatically enriched subsystems) – can be interpreted through topoi. This possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.

In the case of categorial logic, the situation bears some similarity to the former case, but we must be careful not to be misled by apparent coincidences. If we add the tertium non datur as an axiom to formalized intuitionistic logic, we obtain classical logic; that is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory – say, intuitionistic logic – cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among the theorems. But there is a radical difference: in the semantics of intuitionistic logic, as of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist of rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are simply true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as in Gödel’s theorem), there are only three possible cases: they can be formally derivable, they can be formally refutable, or they can be neither derivable nor refutable – that is, undecidable. But undecidability does not induce the logician to readjust or reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or other conditions), T must be readjusted or even rejected. A classical logician would never think of changing the axioms or rules of inference of classical logic because some proposition is undecidable in it. And an intuitionist logician would not care at all to add the tertium to the axioms of Heyting’s system just because it cannot be derived within it.

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori“. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetic a priori knowledge. For instance, Gödel’s proposition that affirms its own underivability is synthetic and a priori. But there are other propositions, for instance mathematical induction, that can also be considered synthetic and a priori. And a great many mathematical definitions that are not mere abbreviations are synthetic. For instance, the definition of a monoid action is synthetic (and, of course, a priori) because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.

Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know as much as possible about what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete systems are the paraconsistent ones. These systems are something like a dual to the paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two contradictory propositions both of which are false; whereas in paraconsistent systems we can find interpretations in which there are two contradictory propositions both of which are true.

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be coined to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even for political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions: to have at one’s disposal a deductive method that offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.

But, philosophically, paraconsistent logic has another very important property: it can be used in a spontaneous way to formalize naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous mathematical thought, without falling into the artificiality of modern set theory. The formalization of this naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.

Conjuncted: Internal Logic. Thought of the Day 46.1


So, what exactly is an internal logic? The concept of topos is a generalization of the concept of set. In the categorial language of topoi, the universe of sets is just one topos. The consequence of this generalization is that the universe, or better the conglomerate, of topoi is of overwhelming amplitude. In set theory, the logic employed in the derivation of its theorems is classical. For this reason, propositions about the different properties of sets are two-valued: there are only true and false propositions. The traditional fundamental principles – identity, contradiction and excluded middle – are absolutely valid.

But if the concept of a topos is a generalization of the concept of set, it is obvious that the logic needed to study, by means of deduction, the properties of non-set-theoretical topoi cannot be classical. If it were, all topoi would coincide with the universe of sets. This fact suggests that to study the properties of a topos deductively, a non-classical logic must be used – and this logic can be none other than the internal logic of the topos. We now know that the internal logic of all topoi is intuitionistic logic as formalized by Heyting (a disciple of Brouwer). It is very interesting to compare the formal system of classical logic with the intuitionistic one. If both systems are axiomatized, the axioms of classical logic encompass the axioms of intuitionistic logic: the latter has all the axioms of the former except one, the axiom that formally corresponds to the principle of the excluded middle. This difference shows up in all the equivalent versions of both logics. But, as Mac Lane says, “in the long run, mathematics is essentially axiomatic” (Mac Lane). And it is remarkable that, just by suppressing one axiom of classical logic, we obtain a theory (intuitionistic logic) whose soundness can be demonstrated only through the existence of a potentially infinite set of truth-values.
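The failure of the excluded middle can be seen in miniature. The following sketch (my own illustration, not from the text) computes in the three-element Heyting algebra on the chain 0 ≤ 1/2 ≤ 1, one of the simplest models of intuitionistic propositional logic; a proposition with the intermediate value witnesses that p ∨ ¬p need not be "true":

```python
from fractions import Fraction

TOP, MID, BOT = Fraction(1), Fraction(1, 2), Fraction(0)

def meet(a, b): return min(a, b)                  # conjunction
def join(a, b): return max(a, b)                  # disjunction
def implies(a, b): return TOP if a <= b else b    # relative pseudo-complement
def neg(a): return implies(a, BOT)                # negation: a -> 0

p = MID
assert join(p, neg(p)) == MID     # p or not-p fails to reach "true"
assert implies(p, p) == TOP       # p -> p remains intuitionistically valid
assert neg(neg(p)) == TOP         # and double negation does not return p
```

Richer topoi correspond to larger Heyting algebras of truth-values; this three-element chain is only the smallest non-Boolean case.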

We see, then, that the appellation “internal” is due to the fact that the logic by means of which we study the properties of a topos is a logic that functions within the topos, just as classical logic functions within set theory. As a matter of fact, classical logic is the internal logic of the universe of sets.

Another consequence of the fact that the general internal logic of every topos is intuitionistic is that many different axioms can be added to the axioms of intuitionistic logic. This possibility enriches the internal logic of topoi, and its application reveals many new and quite unexpected properties of them. No such enrichment is possible in classical logic: if we add one or more axioms to it, the new system becomes either redundant or inconsistent. This does not happen with intuitionistic logic. So, topos theory shows that classical logic, although very powerful in the number of theorems it yields, is limited in its mathematical applications. It cannot be applied to study the properties of a mathematical system that cannot be reduced to the system of sets. Of course, if we wish, we can utilize classical logic to study the properties of a topos. But then there are important properties of the topos that cannot be known; they remain hidden in its interior. Classical logic remains external to the topos.

Hegel and Topos Theory. Thought of the Day 46.0


The intellectual feat of Lawvere is as important as Gödel’s formal undecidability theorem, perhaps even more so. But there is a difference between the two results: whereas Gödel led to a blind alley, Lawvere has displayed a new and fascinating panorama to be explored by mathematicians and philosophers. Referring to the positive results of topos theory, Lawvere says:

A science student naively enrolling in a course styled “Foundations of Mathematics” is more likely to receive sermons about unknowability… than to receive the needed philosophical guide to a systematic understanding of the concrete richness of pure and applied mathematics as it has been and will be developed. (Categories of space and quantity)

One of the major philosophical results of elementary topos theory is that the way Hegel looked at logic was, after all, on the right track. According to Hegel, formal mathematical logic was but a superficial, tautologous script. True logic was dialectical, and this logic ruled the gigantic process of the development of the Idea. Inasmuch as the Idea was realizing itself through the opposition of theses and antitheses, logic was changing too – but not as an arbitrary change of inferential rules. Briefly, in the dialectical system of Hegel, logic was content-dependent.

Now, the fact that every topos has a corresponding internal logic shows that logic is, in quite a precise way, content-dependent: it depends on the structure of the topos. Every topos has its own internal logic, and this logic is materially dependent on the characterization of the topos. This correspondence throws new light on the relation of logic to ontology. Classically, logic was considered ontologically aseptic. There could be a multitude of different ontologies, but there was only one logic: the classical. Of course, there were some mathematicians who proposed a different logic – the intuitionists – but their proposal rested on rather obscure speculative epistemic reasons: they said they could not understand the meaning of the attributive expression “actual infinite”. These mathematicians formed a minority within the professional mathematical community and were seen as outsiders with queer ideas about the exact sciences. However, as soon as intuitionistic logic was recognized as the universal internal logic of topoi, its importance became astronomical, because it provided, for the first time, a new vision of the interplay of logic with mathematics. Something had definitively changed in the philosophical panorama.

Noneism. Part 2.


Noneism is a very rigorous and original philosophical doctrine, by and large superior to the classical mathematical philosophies. But there are some problems concerning the different ways of characterizing a universe of objects. It is very easy to understand the way a writer characterizes the protagonists of the novels he writes. But what about the characterization of the universe of natural numbers? Since in most civilizations the natural numbers are characterized in the same way, we have the impression that the subject does not intervene in forging the characteristics of the natural numbers. These numbers appear to be what they are, in total independence of the creative activity of the cognitive subject. There is, of course, the creation of theorems, but the potentially infinite sequence of natural numbers resists any effort to subjectivize its characteristics; it cannot be changed. A noneist might reply that the natural numbers are non-existent, that they have no being, and that in this respect they are identical with mythological Objects. Moreover, the formal system of natural numbers can be interpreted in many ways – for instance, with respect to a universe of Skolem numbers. This is correct, but it does not explain why the properties of some universes are independent of subjective creation. It is an undeniable fact that there are two kinds of objectual characteristics. On the one hand, we have the characteristics created by subjective imagination or speculative thought; on the other, we find characteristics that are not created by anybody: their corresponding Objects are, in most cases, non-existent but, at the same time, they are not invented. They are just found. The origin of the former characteristics is very easy to understand; the origin of the latter is a mystery.

Now, the subject-independence of a universe suggests that it belongs to a Platonic realm. As far as transfinite set theory is concerned, the subject-independence of its characteristics is much less evident than the subject-independence of the characteristics of the natural numbers. In the realm of the finite, both kinds of characteristics are subject-independent and can be reduced to combinatorics. The only difference is that, according to the classical Platonistic interpretation of mathematics, there can be only a single mathematical universe and, to deductively study its properties, one can employ only classical logic. But this position is not at all unobjectionable. Once the subject-independence of the characteristics of the natural number system is posited, it becomes easy to overstep the classical phobia concerning the possibility of characterizing non-classical objective worlds. Euclidean geometry is incompatible with elliptic and hyperbolic geometries, and nevertheless the validity of the first does not invalidate the others. Vice versa, the fact that hyperbolic and other kinds of geometry are consistently characterized does not invalidate good old Euclidean geometry. And the fact that we now have several kinds of non-Cantorian set theories does not invalidate classical Cantorian set theory.

Of course, a universally non-Platonic point of view that includes classical set theory can also be assumed, but concerning the natural numbers it would be quite artificial. It is very difficult not to surrender to Kronecker’s famous dictum: God created the natural numbers, men created all the rest. Anyhow, it is not at all absurd to adopt a wholly Platonistic conception of mathematics, and it is quite licit to adopt a noneist position. But if we do the latter, the origin of the characteristics of the natural numbers becomes misty. Setting this cloudiness aside, the leap from noneist universes to Platonistic ones, and vice versa, becomes like a flip-flop connecting objectological with ontological (ideal) universes – a kind of rabbit-duck Gestalt or Schröder staircase. So the fundamental question with respect to subject-dependent and subject-independent mathematical theories is: are they created, or are they found? For some theories subject-dependency is far more understandable; for others, subject-independency is very difficult, if not impossible, to negate.

From an epistemological point of view, the existence of subject-independent characteristic traits of a universe would mean that there is something like intellectual intuition. The properties of the natural numbers, the finite properties of sets (or combinatorics), some geometric axioms – for instance, in Euclidean geometry, the axioms of betweenness – would be apprehended in a manner that coincides rather well with the (nowadays rather discredited) concept of synthetic a priori knowledge. This aspect of mathematical knowledge shows that the old problem concerning analytic and synthetic a priori knowledge, in spite of the prevailing Quinean pragmatic conception, must be radically reopened.

Noneism. Part 1.


Noneism was created by Richard Routley. Its point of departure is the rejection of what Routley calls “the Ontological Assumption”. This assumption consists in the explicit or, more frequently, implicit belief that denoting always refers to existing objects. If the object or objects that a proposition is about do not exist, then these objects can only be one thing: the null entity. It is incredible that Frege believed that denoting descriptions without a real (empirical, theoretical, or ideal) referent denote only the null set. And it is also difficult to believe that Russell sustained the thesis that non-existent objects cannot have properties, and that propositions about such objects are false.

This means that we can have a very clear apprehension of imaginary objects, and a quite clear intellection of abstract objects that are not real. This is possible because, to determine an object, we only need to describe it through its distinctive traits. Such description is possible because an object is always characterized through some definite notes. The number of traits necessary to identify an object varies greatly. In some cases we need only a few – for instance, the golden mountain, or the blue bird; in other cases we need more – for instance, the goddess Venus or the centaur Chiron. In other instances the traits can be very numerous, even infinite. For instance, the chiliagon, or the decimal number 0.0000…009 in which the 9 comes after the first million zeros, have many traits; and the ordinal omega or any Hilbert space have infinitely many traits (although these traits can be reckoned through finite definitions). These examples show, in a convincing manner, that the Ontological Assumption is untenable. We must reject it and replace it with what Routley dubs the Characterization Postulate, which says that to be an object is to be characterized by determined traits. The set of the characterizing traits of an object can be called its “characteristic”. When the characteristic of an object is set up, the object is perfectly recognizable.

Once this postulate is adopted, its consequences are far-reaching. Since we can characterize objects through any traits whatsoever, an object can not only be inexistent; it can even be absurd or inconsistent – for instance, the “squond” (the circle that is square and round). And we can make perfectly valid logical inferences from the premiss that x is the squond:

(1) if x is the squond, then x is square
(2) if x is the squond, then x is round

So, the theory of objects has the widest realm of application. It is clear that the Ontological Assumption imposes unacceptable limits on logic. As a matter of fact, the existential quantifier of classical logic could not have been conceived without the Ontological Assumption. The expression “(∃x)Fx” means that there exists at least one object that has the property F (or, in extensional language, that there exists an x that is a member of the extension of F). For this reason, “∃x” is inapplicable to non-existing objects. Of course, in classical logic we can deny the existence of an Object, but we cannot say anything about Objects that have never existed and shall never exist (we are speaking strictly of classical logic). We cannot quantify individual variables that do not refer to a real entity – actual, past or future. For instance, we cannot say “(∃x) (x is the eye of Polyphemus)”. This would be false, of course, because Polyphemus does not exist. But if the Ontological Assumption is set aside, it is true, within a mythological frame, that Polyphemus has a single eye and many other properties. And now we can understand why noneism leads to logical material-dependence.

As we have anticipated, there must be some limitations concerning the selection of contradictory properties; otherwise the whole theory becomes inconsistent and is trivialized. To avoid trivialization, neutral (noneist) logic distinguishes between two sorts of negation: the classical propositional negation, “x is not P”, and the narrower negation, “x is non-P”. In this way, and by applying some other technicalities (for instance, when a universe is inconsistent, some kind of paraconsistent logic must be used), trivialization is avoided. With these provisions, the Characterization Postulate can be applied to create inconsistent universes in which classical logic is not valid – for instance, a world containing a mysterious personage who, under determined but very subtle circumstances, is and is not at the same time in two different places. In this case the logic to be applied is, obviously, some kind of paraconsistent logic (the type to be selected depends on the characteristic of the personage). And in another universe there could be a jewel with two false properties: it is false that it is transparent, and it is false that it is opaque. In this kind of world we must clearly use some kind of paracomplete logic. To develop naive set theory (in Halmos’ sense), we must use some type of paraconsistent logic to cope with the paradoxes produced by the natural way of mathematical reasoning; this logic can be of several orders, just like the classical. In other cases we can use some kind of relevant and, a fortiori, paraconsistent logic; and so on, ad infinitum.
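How a paraconsistent logic blocks trivialization can be shown concretely. The sketch below (my own illustration, taking Priest's three-valued logic LP as a representative paraconsistent system, not one named in the text) lets a proposition take the glut value "both true and false"; a contradiction then holds without everything following from it, i.e. explosion fails:

```python
F, B, T = 0, 1, 2     # false, both (truth-value glut), true

def neg(a): return {T: F, B: B, F: T}[a]   # negation fixes the glut
def conj(a, b): return min(a, b)           # conjunction under F < B < T
def designated(a): return a >= B           # "holds" means value T or B

a, c = B, F            # a is a glut; c is plainly false
assert designated(a) and designated(neg(a))   # a and not-a both hold
assert designated(conj(a, neg(a)))            # so does the contradiction
assert not designated(c)                      # yet c does not follow: no triviality
```

In classical two-valued logic the middle value B is unavailable, so any designated contradiction would force every proposition to follow.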

But if logic is content-dependent, and this dependence is a consequence of the rejection of the Ontological Assumption, what about ontology? Because the universes determined through the application of the Characterization Postulate may have no being (in fact, most of them do not), we cannot say that the objects that populate such universes are entities; entities exist in the empirical world, or in the real world that underpins the phenomena, or (in a somewhat different way) in an ideal Platonic world. Instead of speaking of ontology, we should speak of objectology. In essence, objectology is the discipline founded by Meinong (the Theory of Objects), enriched and made more precise by Routley and other noneist logicians. Its main divisions would be Ontology (the study of real physical and Platonic objects) and Medenology (the study of objects that have no existence).

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of the spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow and map not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated by the topological term ‘manifold.’ If we think of their multiplicities as manifolds, there is a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among them are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and with interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods, if not mathematically. For example, Foucault’s declaration that practicing criticism is a matter of making facile gestures difficult both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work of seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in the (modernist, linear, structuralist) givens of the operation and analysis of global politics.

In a geometric sense, this complexity can be thought of as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical; its purpose is representing and reading the relationships between pieces of information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in mathematics – the essence of the definition – is that there is no absolute scale for describing the distance or relation between certain points, yet it still makes sense to say that an (infinite) sequence of points approaches some other point (though again there is no way to describe how quickly, or from what direction, it approaches). This seemingly weak relationship, defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relation of approach, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure: dimensions can be thought of as properties or qualities, each with its own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual spaces is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A concept geometry for conceptual spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”. (Concept)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change” – a possibility which can be found only in treating concepts as topological structures, not in linguistic descriptions or set-theoretic representations.
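The idea of a conceptual space with distance-as-dissimilarity can be sketched in a few lines. The following is my own illustrative toy (the dimensions, objects, and weights are invented assumptions, not anything from the text): objects are points whose coordinates are quality dimensions, and similarity decays with weighted distance, a common modeling choice in the conceptual-spaces literature.

```python
import math

def distance(p, q, weights):
    """Weighted Euclidean distance between two points in a conceptual space."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(weights, p, q)))

def similarity(p, q, weights):
    """Similarity as exponential decay of distance."""
    return math.exp(-distance(p, q, weights))

# Hypothetical quality dimensions: (hue, size) -- purely illustrative.
robin, sparrow, penguin = (0.2, 0.1), (0.25, 0.12), (0.9, 0.8)
w = (1.0, 1.0)

# Nearby points in the space count as more similar objects.
assert similarity(robin, sparrow, w) > similarity(robin, penguin, w)
```

On this picture, a concept is a region of the space, and "continuous change" is literally a continuous path of points through it.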