Okay. So thank you all for coming to today's installment of the Philosophy of Contemporary and Future Science seminar series. Just a reminder that all of our past seminars can be found on our website, www.pocfs.org. You can follow us on Twitter at @pocfs, and if this is your first time coming to one of our seminars and you'd like to be added to our mailing list, so you can keep abreast of new events and our seminar series for next year, just send me a message in the chat, or you can send us an email (pocfs- lneduhk), and we will add you to the mailing list. Today we're joined by Jun Otsuka, associate professor of philosophy at Kyoto University and a visiting researcher at the RIKEN Center for Advanced Intelligence Project. He works on the philosophy of biology, statistics, and machine learning. He's the author of two books, The Role of Mathematics in Evolutionary Theory and Thinking About Statistics, and today Jun will be telling us what machine learning tells us about the mathematical structures of concepts. We're very happy to have him here. Jun, take it away whenever you're ready. Great, thank you so much.

Well, thanks a lot for having me here. It's a great pleasure. So let me share my slides. Okay, you see the screen now? You see my screen? Okay, good. Well, again, thanks a lot for this great opportunity; I'm very excited about this talk. This talk is a really new project of mine that started last year; I gave a poster presentation at the last PSA in Pittsburgh, but it's very much a work in progress, so any feedback will be highly appreciated. Also, if you have any questions, just interrupt me and let me know, and we can discuss. Okay, so the title of today's talk is What Machine Learning Tells Us About the Mathematical Structure of Concepts, and here is what I mean by that.

Well, my question is: how should we model concepts? Of course, concepts are a big topic in philosophy, and there have been various philosophical approaches; I cannot name them all. The classical approach dates back to Aristotle and Locke, and tries to define concepts in terms of definitions. There's the prototype theory, associated with Wittgenstein and Rosch. Also, less noted recently, there is what I call the functional theory, which originated with Lotze and Cassirer, the 19th-century German philosophers; a similar idea survives today under the name of the theory-theory of concepts. And there is the probably less well-known symmetry-based theory, which I'm going to explain later; it tries to capture a concept in terms of a certain kind of invariance, invariance under a certain kind of transformation. Basically, what these theories of concepts try to do is this: we recognize many things, trees, cars, cats, and so on, and the question is what's happening in our heads when we do. So that is the basic problem. But if you turn your eyes to the machine learning literature, you actually find a similar kind of problem, called representation learning.

Representation in machine learning is basically what is happening inside the model: how objects get represented in deep neural networks, and especially in the latent layers of deep neural networks. A neural network recognizes many things, as you know, like trees and cows and cats, and it outputs a certain distribution over these categories. But in the latent layers these objects get represented as vectors, and understanding how these objects get represented is the key to the generalizability and interpretability of machines. What good representations are, and how to model representations mathematically, is one big challenge in machine learning. So it looks like the two fields are looking at a similar kind of problem: how objects are processed inside brains or machines. My talk today tries to bridge these different approaches.

The strategy is to use mathematical models of representation to model philosophical theories of concepts. I focus on four theories of concepts: the classical theory, which takes a concept as an abstraction; the prototype theory; the functional theory; and the symmetry-based theory. The first part of this talk tries to, how to say, identify the mathematics that is needed for each of these theories.

So, the classical theory can be modeled by Boolean algebra, which was used in the old type of AI, not the recent neural networks, but what people call good old-fashioned AI. The prototype theory gets its mathematical representation in a vector space, which is used in natural language processing and object recognition. An idea similar to the functional theory can be found in generative models, which take a representation to be a submanifold of the representation space. And finally, the symmetry-based theory has a counterpart whose mathematical nature can be cashed out using the notion of groups, which is used in so-called disentangled representations. So, by comparing these philosophical theories with the mathematics, my goal is to identify the mathematics needed to model concepts. That is my goal today. I'll start with the classical theory, then the prototype theory and the functional theory, in this order. So first, the classical theory.

The classical theory is one I think everyone is familiar with. The idea is that concepts are formed by extracting common properties, by abstracting from individual things. You see a lot of human beings, you abstract from these images or these instances of humans, and you get the abstract notion human; likewise you form the abstract notions of horses and other animals like cats and dogs; and then, by extracting their common properties, you get the notion animal, and then organism, and so on. This is a very classical, old idea: Aristotle had this kind of idea, the view sometimes called Aristotelian abstractionism, and Locke had it too. So that is the abstraction side, and you can also go the other way around.

You see that the lower layers are instantiations of the higher, more abstract concepts: concepts are instantiated by adding concrete properties, and defining a concept amounts to determining its location in this kind of ladder by identifying necessary and sufficient conditions. That is the basic idea. Well, this is very intuitive, I think. And what is its mathematical structure? Well, it is a very, how to say, naive piece of mathematics.

It's not difficult mathematics, it's just Boolean algebra, because abstraction gives a partial order, the is-a relationship: a human is an animal, an animal is an organism, and so on; Jun is a human, and if Jun is a human, Jun is an animal. So this is a partial order, and concept formation is done by the operations join and meet. Join takes two concepts and gives you the least abstract concept abstracting from both: for example, animal can be made by abstracting from octopus and horse, and you can join plant and animal and get organism, and so on. The other operation, meet, takes two concepts and gives you back the most abstract common instance of the two, like human equals animal-and-rational: if you combine animal and rational, you get human, which is an Aristotelian idea. In this way you can organize concepts into a Boolean lattice: maybe by combining horse and human you get centaur, and if you combine octopus and horse, well, there is no such thing, so you get nothing, and so on. So this is a Boolean algebra, with nothing and thing as its zero and one. This is the first approach, modeling concepts with Boolean algebra, and it was actually used in old-style machine learning, good old-fashioned AI; for example, Marvin Minsky tried to organize concepts in this way. Also, recent ontologies in information science use exactly this is-a relationship to organize concepts. It is still used because it is intuitive and interpretable, and also, mathematically speaking, it has a very nice syntax-semantics relationship via Stone duality.
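The join and meet just described can be sketched in a few lines of Python. This is a toy of my own, not from the talk: each concept is a set of properties (the property names are made up), and abstraction is set inclusion, so join is intersection of properties and meet is union.

```python
# Toy concept lattice: a concept is a frozenset of properties.
# A is at least as abstract as B when A's properties are a subset of B's.
concepts = {
    "organism": frozenset({"alive"}),
    "animal":   frozenset({"alive", "moves"}),
    "human":    frozenset({"alive", "moves", "rational"}),
    "horse":    frozenset({"alive", "moves", "four-legged"}),
    "rational": frozenset({"rational"}),
}

def is_a(sub, sup):
    """Partial order: e.g. human IS-A animal."""
    return concepts[sup] <= concepts[sub]

def join(a, b):
    """Least abstract common abstraction: the shared properties."""
    return concepts[a] & concepts[b]

def meet(a, b):
    """Most abstract common instance: the combined properties."""
    return concepts[a] | concepts[b]

assert is_a("human", "animal")                        # Jun-style inference
assert join("horse", "human") == concepts["animal"]   # abstracting horse & human
assert meet("animal", "rational") == concepts["human"]  # the Aristotelian definition
```

Join returns the intersection because abstracting away the differences between two concepts leaves only what they share; meet stacks the property lists together, which is why incompatible concepts meet in an empty, "nothing" element.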

The syntax is the relationship between words or symbols, but if you take the extensions of these words, you give it a semantics: you get a topological space that represents the semantics of the Boolean algebra. So you have a nice syntax-semantics pair, which is great. But nowadays this approach is not popular, because of many problems. I will not dwell on the classic criticism of abstractionism, that it is very difficult to give a definition of a concept; it is usually very hard to give necessary and sufficient conditions for human being, and so on. But other problems include, for example, that it allows arbitrary abstraction: if you join beef and cherry, do you get red juicy food? You can take the join, but it sounds like nonsense. It also doesn't capture the inferential role of concepts; well, it captures a priori inference, but not inductive inference. It has no inductive meaning for us; it just categorizes concepts. And the biggest problem is that it is not easy to learn this structure from data. That was the reason good old-fashioned AI was not successful: you have to construct the structure by hand, or you have to supervise everything. So that is a problem. Okay, so that was the first idea, abstractionism, which can be captured by Boolean algebra. The next is the prototype theory.

The idea, I think, is from Rosch. The idea is that a concept is a cluster of similar things or, if you are thinking of images, maybe of impressions. As such, a concept like cat may not be definable with necessary and sufficient conditions. That was a point made by Wittgenstein: you cannot define the concept of game; there is only a family resemblance among different instances of games. So how do you define membership in a concept? Prototype theory says that membership is determined by distance to the prototype. If you take the concept bird, there is a certain very typical image of a bird, and whether something is a bird or not is determined by distance, by how similar it is to this prototype. A chicken, well, it's not a very typical bird, because it doesn't fly, but it still looks similar to the other birds, so it's a bird. Or a penguin, well, maybe it's a bird; and this other one is not a bird, maybe, and so on. So that is the rough idea of prototype theory. Okay, then the question is: what's the mathematics behind this?

The mathematics behind this is a vector space; this is used in the so-called distributed representations. Basically you have an n-dimensional, high-dimensional vector space which has a metric, that is, you can measure distances, and a concept is a region in this high-dimensional vector space. The metric is used to measure similarity, for example cosine similarity, which measures the distance between two points by the angle between the two vectors. Each item, each image or each word, is represented as a vector in the high-dimensional space: each data point represents an image, or a word in natural language processing; similarity is measured by the metric; and a cluster of similar images or words forms a certain kind of concept. So this is the idea. The point, then, is that prototype theory, in order to work, is assuming some kind of vector space with a metric. I think this is a familiar point in the literature.
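Cosine similarity, and the linear structure that comes up later in the talk (king minus man plus woman lands near queen), can be illustrated with a small sketch. These are hand-made two-dimensional vectors of my own, not learned embeddings; real word vectors are high-dimensional and trained from co-occurrence data.

```python
import math

def cosine_similarity(u, v):
    """Similarity as the cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings: dimension 0 ~ gender, dimension 1 ~ royalty (made up).
emb = {
    "king":  [ 1.0, 1.0],
    "queen": [-1.0, 1.0],
    "man":   [ 1.0, 0.0],
    "woman": [-1.0, 0.0],
    "apple": [ 0.0, -1.0],   # unrelated filler word
}

def nearest(vec, exclude):
    """Vocabulary word whose embedding is most cosine-similar to vec."""
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine_similarity(vec, emb[w]))

# king - man + woman lands exactly on queen in this toy space.
v = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
print(nearest(v, exclude={"king", "man", "woman"}))  # queen
```

The cluster-and-prototype picture falls out of the same machinery: a concept is a region of vectors, and membership grades off with cosine distance from the region's center.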

This has been very successful, and one very good thing about it is that it is easily learnable from data. In recent natural language processing you have a huge data set of text, and basically what the algorithms look at is the co-occurrence of words. These co-occurrences give you a huge matrix, which can be reduced to a representation in a certain vector space; the word2vec algorithm, for example, does that, and you get this kind of vector space. And the interesting thing, I think the final point here, is that it allows certain linear operations; it really is a vector space. For example, if you subtract from the word king the word man, and add woman, you get queen, and so on. This is possible because the objects are embedded in a vector space, and the nature of a vector space allows linear operations. So this is cool, but it also has its restrictions. One is that it is hard to deal with other logical connectives such as negation. Negation is very difficult, because it's not so clear what negation means: what is the negation of, say, woman? Is it man? Well, that's not the negation, right? It is very difficult to deal with this kind of relationship in a vector space, and disjunction is also a nightmare. A related problem is compositionality. You can represent words like this, but it's very hard to represent what a sentence means; to get the representation of a sentence you have to compose the words, but how to compose word vectors is still unsettled, and
I think this is probably related to the difficulty of dealing with negation and so on. An obvious further point is that these are mostly discrete data, like language, so this approach doesn't deal very well with continuous data. But even so, it is successful and is used quite widely, though I think there is still something that needs to be addressed here. Okay, so that is the prototype theory, and now I want to move on to the third one, the functional theory, unless you have any questions so far. Okay, so then, the functional theory; the idea is... okay, you are muted. Sorry, I do have a question; I don't know if I should ask at the end. It's just that the prototype theory seems to be a version of abstractionism; I'm not sure what the difference between these two theories is. Because if membership is defined by distance to a prototype, then you get necessary and sufficient conditions. Not really, because of the following.

So, first of all, these are all vague boundaries; there might be non-typical instances. One problem with classical abstractionism is that it doesn't deal well with boundary cases, because it is very difficult to circumscribe a region with a necessary and sufficient condition. But prototype theory actually deals well with these kinds of things, and concepts can even overlap: something might be a bird and at the same time a dinosaur; that's possible according to prototype theory. Does that answer your question? Oh, I disagree, but maybe I'll ask my question at the end. Okay, good. Any other questions? Comments? Okay, so then.

Let's move on to the functional theory. The idea behind it is this. Both the classical theory and the prototype theory deal with combinations of properties. In the classical theory this is clear, and in the prototype theory, when psychologists actually tried to identify the cluster concepts, they also used properties: for example, to determine the distance between chicken and, I don't know, penguin, they would ask you to list properties, do they have wings, do they fly, and so on, and how many properties match is how they decided the distance between these concepts. But the idea behind the functional theory is that not all combinations of properties are possible; only certain limited combinations of properties can occur. That is, the properties of concepts are constrained by a certain functional relationship, and this functional constraint characterizes a concept. That's the idea. So, for example, the various instances of animals have certain kinds of properties or features, the features they can have, say the device of locomotion and the respiratory system, and each animal has a certain way of moving and a certain respiratory system, but not every combination is possible. For example, it is presumably impossible to have an animal that flies and uses gills for its respiratory system; well, it might be possible, but we don't see any such animals. So this functional relationship between or among features is what characterizes the concept of animal. That is the idea, and it was expressed by German philosophers like Lotze and Cassirer;
Cassirer's function-concept is the famous one. Also, the recent theory-theory of concepts, which identifies a concept by its role in a certain theory, has a similar idea to this functional theory. But anyway, the idea is that there is a certain restriction or constraint among possible features, and the concept embodies this constraining relationship or function. Now, do we have a similar idea in machine learning?

Actually, the answer is yes. The idea is called the manifold hypothesis. The idea behind the manifold hypothesis is that actual data live in a very high-dimensional space, but data do not occur randomly in this high-dimensional space; if you look at data, they lie in a very small subregion of the entire possible space. Data lie along a low-dimensional region, low-dimensional but maybe still 80 or 100 dimensions, called a manifold, inside the very high-dimensional ambient space. That is the idea, and it means that most property combinations just do not occur, or are impossible. So what's the relationship to functions? Well, it is a mathematical fact that an embedded submanifold, for example a hypersurface in a high-dimensional space, can be defined as the inverse image of a smooth function: this surface can be characterized as the inverse image of a function that maps the points of the surface to a single point. So this is a hypothesis, and it motivates so-called manifold learning, which tries to identify such manifolds from data: if data lie along a low-dimensional region, the goal is to identify that region. Here is a toy example, handwritten numbers, the figure four. There are different images; they are different as images, but they are all instances of the number four, and they lie on a kind of submanifold that represents the figure four. So that is the idea. Okay, so this is a kind of machine learning version of the functional theory.
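The "submanifold as inverse image of a smooth function" point can be made concrete with a toy of my own (not from the slides): the unit circle lying in the z = 0 plane is a one-dimensional submanifold of three-dimensional space, and it is exactly the set of points that a smooth function maps to (0, 0), a functional constraint on the coordinates.

```python
import math, random

def f(p):
    """Smooth map from R^3 to R^2 whose zero set is a 1-d submanifold:
    the unit circle in the z = 0 plane."""
    x, y, z = p
    return (x * x + y * y - 1.0, z)

def sample_on_manifold():
    """'Data' generated by a single latent degree of freedom (the angle),
    even though each point lives in the 3-d ambient space."""
    t = random.uniform(0.0, 2 * math.pi)
    return (math.cos(t), math.sin(t), 0.0)

# Every sampled data point satisfies the functional constraint f(p) = 0.
for _ in range(5):
    p = sample_on_manifold()
    assert all(abs(c) < 1e-9 for c in f(p))
```

The analogy to the talk: the ambient coordinates are the observable features, the constraint f = 0 says which feature combinations are possible, and manifold learning is the inverse problem of recovering such a constraint from samples.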

So this is the picture of where the submanifold lives. The machine takes an input here, and it is processed into the latent space; the latent space is a certain high-dimensional space, the submanifold sits in a subregion of this latent space, and the representation is identified with that manifold. So this is the whole idea, and it motivates us to define a functional/manifold theory of concepts, according to which concepts are submanifolds in latent space.

And there are several good things about this. It takes into account functional constraints, by definition, of course. It also takes into account similarity, because a manifold is a geometrical object, so it has a metric, and you can measure the distance between images: this four and this four look similar, and they are close to each other on the manifold, and so on. So it takes similarity into account. Two other good points are that it is learnable from data, which I'm going to discuss shortly, and that it allows continuous transformations, which I'll also discuss shortly. It has a problem too, but we can discuss that later. Okay, so let's first see learnability from data.

So how can you learn the manifold? This is a fascinating area of machine learning, and it is basically what is used in the recent generative models. What I introduce here is one of them, the variational autoencoder, which is a classical example dating from about ten years ago. The basic idea is that the variational autoencoder tries to output images similar to its inputs. It takes an input and compresses the data into the latent space, of a dimension m smaller than n, and then it decodes the data and reconstructs the image; the goal is to output an image that is as similar as possible to the input, so it mimics the input. In doing so it adds a certain noise when it creates the representation in the latent space, and that is what makes the variational autoencoder very strong; this noise makes the approach very useful for generalization and representation, and so on. But anyway, the idea is that if it gets as input a handwritten two, it encodes it in the latent space as a vector, and then tries to output an image similar to the input.
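The noise step just mentioned is usually written as the reparameterization trick. A minimal sketch, assuming a stand-in encoder that returns made-up numbers rather than a trained network:

```python
import random

def encode(x):
    """Stand-in for a trained encoder: returns the mean and standard
    deviation of the approximate posterior over a 2-d latent code.
    The numbers are made up for illustration."""
    return [0.5, -1.0], [0.1, 0.2]

def reparameterize(mu, sigma):
    """The VAE's noise step: z = mu + sigma * eps with eps ~ N(0, 1).
    Writing the sample this way keeps z differentiable in mu and sigma,
    which is what lets the encoder be trained by gradient descent."""
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

mu, sigma = encode("some image")
z = reparameterize(mu, sigma)   # a noisy latent code scattered around mu
```

Because the code for each input is a small cloud rather than a single point, nearby latent points decode to similar images, which is what makes the latent region behave like a smooth manifold rather than a lookup table.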

If you train this model with data, you get a latent space like this, showing where each image lives in the latent space. For example, this particular handwritten figure two is encoded this way. What you see here, and actually this is a two-dimensional projection of the high-dimensional latent space, is that the data cluster, and it's not just a cluster: it is a kind of submanifold in the latent space. So that's the idea. And the fascinating thing about this is the following.

So you have a trained model, and then you can directly choose a point in the latent space by randomly generating a seed, that is, by randomly sampling a point in the latent space, and then feed it to the decoder. What happens? You get a new image. The training data are these, but you generate a random number like this, you take a point in the latent space, and you get new images. In this way it can generate new data, new images, and this is why it is called a generative model. As for the recent developments like Stable Diffusion: a diffusion model is not a VAE, but it has a similar idea. It takes a random seed, and then the decoder denoises it and reconstructs, creates, an image. The diffusion model is the latest version of the generative model, but anyway, this is how it works. Okay, and what is interesting about this is that the concept or representation in latent space is a real manifold.

And what do I mean by this? By manifold, I mean that movement makes sense. For example, suppose in a face-recognition autoencoder you get a latent space like this: this is a real picture and this is also a real picture, and suppose this data point is here and this one is here. Then by moving this point to this point, you can change the image continuously: you can change this woman into a man by sliding along the latent space, along the manifold, and likewise here you can put sunglasses on this guy. This is the reason we call it a manifold: continuous movement makes sense. And this also suggests that each dimension of the manifold, this direction and this direction, has a particular, specific meaning, and represents an independent attribute. For example, if I move this point to this point and I get this image from this image, it makes sense to say that this dimension encodes maleness or femaleness, right? And likewise this dimension represents eyeglasses-ness or something. So this kind of meaning can be read off from the manifold, which is cool.
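The sliding-along-the-manifold move is, at its simplest, linear interpolation between two latent codes. A minimal sketch; the latent vectors and the attribute labels here are made up, and a real decoder would turn each intermediate code into an image:

```python
def interpolate(z0, z1, t):
    """Slide along the latent space: convex combination of two codes."""
    return [(1.0 - t) * a + t * b for a, b in zip(z0, z1)]

# Hypothetical 3-d latent codes; imagine dim 0 encodes maleness and
# dim 2 encodes eyeglasses (labels invented for illustration).
z_woman = [-1.0, 0.3, 0.0]
z_man   = [ 1.0, 0.3, 0.0]

# decoder(interpolate(z_woman, z_man, t)) for t from 0 to 1 would morph
# one face into the other; here we just walk the path in latent space.
path = [interpolate(z_woman, z_man, t / 4) for t in range(5)]
```

Note that only dimension 0 changes along this path, which is exactly the "this direction means maleness/femaleness" reading of the latent space.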

So I think this is a very cool method, but there is also a challenge. The challenge is that you don't always get a good representation. What do I mean by a good representation? Well, a good representation should be disentangled. What do I mean by that? A representation has certain attributes or features, but these features should be mutually independent. For example, in the previous example, glasses, gender, hair color, they are distinct features; but a property like skin-color-and-expression-and-age-and-gender, we don't think of that as one feature; it's a hodgepodge of different things. I want the machine to give me only these atomic features, not this kind of hodgepodge, but it is often difficult to identify the atomic features. Sometimes if you change the skin color, you also change, in this case for example, the hair, and in some cases gender is entangled here, and age and gender are entangled here: for example in this image, if you change the age, the gender of the image also seems to change. So the challenge is how we can get, not this kind of entangled representation, but a disentangled submanifold, and that is very difficult; it is a challenge, but it is an important one. And the last component of the first part is this: what is the idea behind disentangledness?

And that brings us to the final theory, the symmetry-based theory. The idea is that a concept must be invariant under a certain group of transformations, and this kind of transformation is called a symmetry. This kind of idea has been around for a long time, and symmetry-based accounts of concepts have also been discussed by recent authors. The idea is: look at these cars. They look different, but they are the same car; we identify them as the same car. What this means is that the property of being this car is invariant under rotation: if you rotate the image, the result is still the same car. This kind of symmetry consideration plays a very important role in modern physics and chemistry, because symmetry is thought to determine the nature of objects: a chemical configuration, for example, is determined by invariance, by which types of transformation the structure is invariant under. So this is the symmetry-based idea of concepts. Then the question is how to model it, how to cash it out mathematically.

Well, symmetry is invariance. Here is a molecule, maybe a water molecule, and we want a classification method; our classification method is a function f. If we apply this machine to this data, it says water. But the water molecule can, you know, rotate; let's represent this rotation by a transformation, a function g. This function rotates the water molecule, and then we apply the classification, and we want the result to be the same: it should still give the answer water. This is the idea of invariance: your classification does not depend on the transformation; it stays invariant. This is one idea of symmetry.

The other idea is covariance: the output changes along with the change in the object. For example, consider location detection. It detects the locations of atoms, so it gives you the location of the oxygen atom and the hydrogen atoms and so on. Now apply a transformation; then the detected locations should be different, but in a systematic way: the result should be the same as applying the transformation to the output for the original data. This is the idea of covariance. So these are two kinds of symmetry which, depending on the purpose, we may want our classification mechanism to satisfy. Okay, so then, how do we implement this in machine learning, and what is the desired disentanglement?
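The covariance condition, f(g(x)) = g(f(x)), in a minimal sketch of my own, with the centroid standing in for a real location detector:

```python
import math

def translate(points, t):
    """Transformation g: shift every point by the vector t."""
    return [(x + t[0], y + t[1]) for x, y in points]

def locate(points):
    """Toy location detector f: the centroid of the point set."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

# Covariance (equivariance): detecting after shifting equals shifting
# the detection, i.e. f(g(x)) == g(f(x)).
atoms = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shift = (3.0, -2.0)
after_shift = locate(translate(atoms, shift))
shifted_answer = translate([locate(atoms)], shift)[0]
assert all(math.isclose(a, b) for a, b in zip(after_shift, shifted_answer))
```

Unlike the invariant classifier, the detector's output is supposed to move, just in lockstep with the input; that systematic tracking is what covariance demands.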

We use this idea of covariance to define disentanglement. There are various transformations: transformations that just change hair color, or put on eyeglasses, and so on; these are transformations of images. And these transformations fall into different kinds or categories: one kind of transformation changes hair color, another kind changes eyeglasses-ness, and so on. Likewise with locations: one kind of transformation rotates, another translates, and so on. So there are different kinds of transformations, and the idea of disentanglement is that in the latent space there should be a decomposition, a matching decomposition of the latent space into distinct dimensions or attributes, such that each kind of transformation acts only on its own subspace.


That is, if you apply a transformation g_i, say one that changes hair color, then it should act only on the particular dimension for hair color, and the other dimensions should remain intact. That is the idea behind disentanglement. Disentanglement means that the latent space is decomposed into distinct dimensions, each kind of transformation acts on just one of these subspaces, and the action on each dimension should also be a covariant one.


Okay, so how do we implement this in machines? The idea is that you get the input, and the input can undergo transformations. For example, with an image of a face you can change, say, the gender, or you can age the person. Then your latent space must be such that, if you encode the image to a point, the latent space must follow the same kinds of transformations: g2 should bring the point here, g1 should bring it there, and the latent space should be decomposed in such a way that g2 acts only on z2, g1 acts only on z1, and so on. If that is the case, we can say that z1 is the attribute "age", z2 is the attribute "gender", and so on. So this is the idea of disentangled representation. Okay, I think that covers all the types of mathematical models for concepts.
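A minimal sketch of this decomposed latent space, with the transformations realized as shifts on separate subspaces. The two-dimensional subspaces and the shift actions are my own illustrative choices, not anything from a real model:

```python
import numpy as np

# Toy latent space decomposed as Z = Z1 (+) Z2, two dimensions per attribute.
z = np.array([0.3, -1.1, 0.8, 0.2])

def g_age(z, delta):
    """An 'age the person' transformation: acts only on subspace Z1 (dims 0-1)."""
    out = z.copy()
    out[:2] += delta
    return out

def g_gender(z, delta):
    """A 'change gender' transformation: acts only on subspace Z2 (dims 2-3)."""
    out = z.copy()
    out[2:] += delta
    return out

# g_age leaves Z2 untouched (and vice versa)...
assert np.allclose(g_age(z, 0.5)[2:], z[2:])
# ...and the two actions commute, as actions on disjoint subspaces must.
assert np.allclose(g_age(g_gender(z, 0.2), 0.5),
                   g_gender(g_age(z, 0.5), 0.2))
```

In a trained disentangled model the encoder would be learned so that image-space transformations map onto exactly this kind of block-wise latent action.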


One is Boolean algebra, the second is vector spaces, the third is manifolds, and the fourth is symmetry. So, okay, is it okay? I have, I think, five more slides to go, but if you have any questions, maybe I can take them before going to the end. (Yeah, sure, we've got plenty of time, if you've got some more slides.) Okay, good. So these are the four ideas, and some of these ideas are algebraic, for example the Boolean one.


The Boolean representation is a thoroughly algebraic mathematical structure, and symmetry is also an algebraic notion, because a group is an algebra: there are operations, the transformations, and they act on discrete items and so on. Whereas vector spaces and manifolds are very geometrical concepts, and the geometric aspects of concepts are the similarity of concepts and the dimensionality of concepts; geometry allows for continuous change and transformations and so on. So if you look at the various approaches or options, you see that some approaches focus on the algebraic side of concepts while others focus on the geometrical side.


I'm not trying to determine which one is the correct way; I'm just saying that different aspects require different mathematical structures, and the challenge is how to combine these aspects. I think this is a very interesting challenge, because in the classical concept literature you see the debate between, for example, the classical approach, prototype theory, the theory-theory, and so on, and they are debating over which account of concepts is correct. Well, it may be that these aspects can be unified from the mathematical perspective, if you can unify the algebraic structure and the geometrical structure, and I think this is a power of mathematical representation. But this is an open question.


So now I want to move on to the philosophical implications part, but before doing so I want to issue a caveat. In this talk I have been comparing concepts in psychology and philosophy with representations in the deep learning literature. But I'm not saying that AI models give us a model of human cognition, for the simple reason that a deep neural network is not like the brain; structurally speaking, they are very different. Rather, I'm suggesting that AI models can suggest what good concepts should satisfy, what good concepts should be, and what mathematical machinery is needed to model them. So far I have identified four desiderata of good concepts: logical operations, similarity, functional constraints, and disentanglement, with the corresponding mathematical machinery: Boolean algebra, vector space, manifold, and group. Now I want to explore other philosophical implications. One is where to find concepts; the second is the relationship between concepts and causality; and the third is empiricism versus rationalism. Okay, so first, where to find concepts. I think we can get some kind of suggestion from the machine learning literature.


The first is where to find a concept. Philosophers and psychologists alike have tried, and in my opinion failed, to define concepts in the data space. They tried to define concepts with explicit features and labels, for example by trying to identify necessary and sufficient conditions, or when the prototype theorists tried feature matching with explicit features like having wings or being able to fly, and so on. These are attempts to define or identify concepts with explicit features. But that did not work very well, and that was a cause of skepticism and eliminativism and so on. If you look at machine learning, though, the representation cannot be defined in the data space; it can be found only in the latent space. In order to get the latent space you have to reduce the dimension of the data, and the features in machine learning representations are constructed from data; they are not apparent or evident in the data. You have to make them from the data, and they do not need to have explicit labels. This introduction of a latent layer was a key to the success and generalizability of machine learning. I suspect that humans also have this kind of latent space in our heads, and that we form our concepts in this latent space. This is the hypothesis, and I think it is plausible. If that is the case, we should look in a different place than we have been looking to define concepts. So this is the first implication, and the second implication is the relationship between concepts and causality.
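As a toy illustration of the point that latent features are constructed from data rather than read off from it, here is a sketch using PCA via the SVD as a stand-in for the learned encoders the talk has in mind; the dimensions, noise level, and variance threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data space: 10-D observations that secretly vary along 2 latent factors.
latent = rng.standard_normal((200, 2))            # hidden factors (not observed)
mixing = rng.standard_normal((2, 10))             # how factors show up in data
data = latent @ mixing + 0.01 * rng.standard_normal((200, 10))

# Reduce the dimension of the data to construct a 2-D latent space.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[:2].T    # coordinates in the constructed latent space

# The first two constructed dimensions capture almost all the variation,
# even though no single observed feature corresponds to either of them.
assert s[:2].sum() / s.sum() > 0.95
```

The two recovered dimensions have no explicit labels in the data; they exist only in the constructed latent space, which is the point being made about concepts.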


I don't think that many philosophers have paid much attention to the role of causality in defining concepts. But I think causality is important, and that has been pointed out in the recent machine learning literature because of problems with disentanglement. Disentanglement says that each feature should be independently modifiable, but this is not always the case. It might be too strong a requirement, because some features are closely dependent, like age and hair color. So it might not be realistic to independently modify the attributes; in the data, they are not independent.


If so, we have to model this dependence relation, causal dependence, not conceptual dependence, and that is the goal of causal representation learning, which tries to explicitly model and learn causal relationships among latent features in the latent space. It's a combination of deep neural networks and causal discovery. This is a new field, and there are still many things to be done. One issue is that it requires a very strong inductive bias, that is, it requires strong assumptions; it cannot just read them off from the data. So it's a very challenging task. But anyway, I think a theory of concepts should take care of this causal aspect, which is apparent in natural kinds. If we take natural kinds as concepts, a natural kind is a causal notion, and so the idea of causality is built into the idea of concepts. So I think the causal aspect must be taken care of in creating a theory of concepts. This is the second implication.
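A minimal sketch of the causal dependence among latent attributes just discussed, written as a toy structural causal model over the age/hair-color example; the sigmoid form, noise level, and parameters are illustrative inventions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(n, do_age=None):
    """Toy SCM over two latent attributes: age is exogenous,
    greyness of hair depends causally on age (plus noise)."""
    age = rng.uniform(20, 80, n) if do_age is None else np.full(n, do_age)
    grey = 1 / (1 + np.exp(-(age - 55) / 5)) + 0.05 * rng.standard_normal(n)
    return age, grey

# Intervening on age shifts hair color: the two attributes are
# NOT independently modifiable, contra strict disentanglement.
_, grey_young = sample_latents(1000, do_age=30.0)
_, grey_old = sample_latents(1000, do_age=70.0)
assert grey_old.mean() > grey_young.mean()
```

Causal representation learning would aim to recover exactly this kind of dependency structure among learned latent features, rather than assuming the features are independent.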


And the third and last implication is about a very old issue. Kant famously based his epistemology on Aristotelian logic and Euclidean geometry: Euclidean geometry defines the form of our perception, and Aristotelian logic gives us his categories. What the machine learning literature suggests is that we might need a more powerful or sophisticated mathematical framework, not just Aristotelian logic and Euclidean geometry. For instance, the manifold is not Euclidean geometry; it's non-Euclidean. That kind of suggests a project of enriched Kantianism. But then the question is which mathematics is needed, and that, I think, is a very interesting question which I want to pursue. There are also some implications for the empiricist versus rationalist question. Once we identify the mathematics necessary to represent or form concepts, the question is whether this structure can be learned from data for free. It seems the answer is negative, because, for example, with disentangled representation and even more so with causal representation, it is not possible to obtain the representation in a free, unsupervised way. You have to give some kind of inductive bias, and this is a certain kind of a priori constraint on the machine. If that is the case, maybe we humans also need a certain kind of inductive bias or a priori constraints in order to have these kinds of representations, disentangled and causal. So in a way we need a certain kind of rationalist element to understand concepts. Okay, but these are still very much vague and should not be taken as conclusions; they are research questions.


So the interim conclusion is that machine learning tells us a lot about the mathematical structure of concepts, but there are also many challenges. What mathematical structure is needed to sufficiently model concepts is an open question, and how to combine the different aspects of concepts, especially the geometric and algebraic sides, is a further challenge. But I do think that understanding and solving these challenges contributes both to the philosophical question of what a concept is and also to machine learning research, where it is very important to ask what machines think, in the sense of understanding the inside of the machine; that is a huge topic in the recent explainable AI (XAI) project. I discuss these matters in my recently published book, in chapter four, so if you're interested, please take a look. Okay, with that, I think this is pretty much all I have; this is my last slide. So thank you very much.
