Hello everyone, and welcome to this new video of our introductory deep learning training series. First of all, remember to retrieve the notebooks from the course site, so that you have all the examples and can run them or play with them as you follow along, and also the PDF of the slides, which you can see behind me and which will be much easier to read there than in the video. So, we have covered a certain number of things up to now.
Last time, in the previous sequence, we looked at recurrent networks for sequential data. Now we are going to see something completely different by turning to autoencoders. This is a particular architecture that will allow us to do unsupervised learning. There are quite a few things in this fifth sequence, which we will break down into three videos.

We will first look at the architecture and its unsupervised aspect, and we will take the opportunity to look at some programming aspects, in particular the functional API of Keras, which makes it possible to do much more advanced things than the classic sequential API that we have used so far in our examples. Then we will see three examples: first, the implementation of a denoising network with this functional API, and then two other examples implementing more complex, more elaborate models. But first, let's start with what an autoencoder is and how it works. As I said, until now we have been in the domain of supervised models, of the classification or regression type, and here we are going to look at something really different, unsupervised, which falls into the category of generative models.
So what is an autoencoder? It is a network composed of three parts: a first part which is an encoder, a second part which is the latent space, and a third part which is a decoder. How do these three things work together? The general principle is the following: we take data as input, typically large data such as images, although it can be any type of data. We are going to reduce it, to concentrate it progressively, with a set of layers, so as to recover, at the output of this encoder, a very small data structure: a vector which will be a projection of our data into this so-called latent space, of small size, a few tens or a few hundreds of components. And then we do the reverse: with a decoder, we start from this concentrated vector, which holds the essential characteristics of our starting data, and we progressively enlarge and upsample it so as to try to reproduce the starting data. That is the general idea. So let's look at it a little closer and try to see in detail how to build an autoencoder and how this mechanism works. The first thing, obviously, is to look at the encoder and see what it looks like. In our example we are going to use a fairly classic use case, probably the most classic of all.
We work with images, and therefore, to downsample our data, we will quite simply use a convolutional network. We have already seen what a convolution layer is: we take a starting image, we have a window that we move over this image, and at each step we take what is inside this window, we multiply it element by element by a weight matrix, the kernel, then this weighted sum is accumulated, we add a bias, and we pass the result, weighted sum plus bias, through an activation function. The results, accumulated progressively like that, compose the output planes that feed the following layers. That is the classic convolution. There are some important parameters in convolutions.
As a reminder, you can find all this in sequence number 2, the second sequence of our series. The first parameter is the padding ('remplissage' in French), which allows us to widen our initial image a little, so as to ensure that the output image of the layer can have the same size as the input image. It is a parameter called padding that we give to our convolution layers, and if we give it the value 'same', the padding is calculated so that the output image of the layer keeps the same size as its input image. That is what we see in this example behind me: we have padding='same', which allows us to recover an output image, in blue, which is the same size as our input image.

Then there is the other possibility. Here, we have no padding this time: our convolution window still moves around, but in a more constrained way, staying inside the original image, and we see that the output image in blue is obviously smaller than the input image. If the padding parameter is equal to 'valid', the size of the image is not preserved. So that is our first parameter. A second important parameter of our convolution layers is the stride ('pas' in French), the step with which we move our window over the input image.
So here, for example, behind me, we now have a stride of (2,2). At each iteration, at each step, we shift our window by 2 pixels: 2 pixels to the right, then, when we have reached the end of a row, we go down by 2 pixels and start shifting by 2 pixels to the right again. Obviously, by doing that, with a stride greater than 1, we have de facto very significantly reduced our initial image: a stride of 2 in each direction divides the width and the height of the output image by 2, which is a very important size reduction. Those are the essential parameters.
There are other parameters, but those are really the two big parameters we are going to play with, the stride in particular. And of course there is also the size of the convolution kernel, which is another important parameter when configuring a convolution layer. These are the essential parameters: we cannot create a convolution layer without specifying the size of the kernel, whether or not to apply padding, and the stride with which we move the window over the input image. So those are the parameters of our convolution layers. Our encoder is based on such convolution layers: typically, it takes an image as input and then, by applying successive convolutions, reduces the size of the image so as to obtain something very small, a vector of very small size, which is a projection of our input image into this famous latent space. So, in fact, it is relatively simple.
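To make the size arithmetic concrete, here is a small helper, not taken from the course notebooks, that computes one spatial dimension of a convolution output using the standard formulas that Keras follows for its padding modes:

```python
import math

def conv_output_size(n, kernel, stride, padding):
    """One spatial dimension of a Conv2D output, Keras conventions:
    'same'  -> ceil(n / stride)                 (input is padded as needed)
    'valid' -> floor((n - kernel) / stride) + 1 (window stays inside the image)"""
    if padding == "same":
        return math.ceil(n / stride)
    return (n - kernel) // stride + 1

print(conv_output_size(28, 3, 1, "same"))   # 28 : size preserved
print(conv_output_size(28, 3, 1, "valid"))  # 26 : the image shrinks
print(conv_output_size(28, 3, 2, "same"))   # 14 : stride 2 halves the size
```

These are the three situations of the slides: 'same' padding preserving the size, 'valid' padding shrinking it, and a stride of 2 dividing it by two.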
It is a little bit like what we have done until now, except that we limit the depth, the number of convolution layers, so that at the end we can flatten everything to constitute a vector of small size. Then, on the other side of our latent space, we do the opposite: we decode this vector, this projection in the latent space, so as to reconstitute an image. In the denoising case, the objective is to reconstitute an image which corresponds not to the input image, but to its clean, noise-free version. We give a noisy image as input, we reduce it, and then enlarge it again on the other side; the objective is that the output image of our decoder corresponds to the input image but without the noise, which we obviously do not want to reproduce, since we want to get rid of it. So our decoder will rely on particular layers.
These are somewhat particular convolution layers called transposed convolution layers. You can find heaps of names for this operation, 'deconvolution', 'inverse convolution', but the appropriate term, the real term, is transposed convolution. Its objective is to upsample, to grow our image, to reconstitute an image from the essential characteristics that we recovered from the latent space. The principle, and here it gets a little subtle, is the following: we take a pixel of the input image, we multiply the kernel matrix by the value of this pixel, and we project this kernel onto the output image. Then we shift our pixel, we shift our projections accordingly, and we progressively sum the projections of the kernel. So we stay a little on the same philosophy; simply, instead of scanning a window, we start from a single value, multiply it by the matrix of our kernel, and project that onto the output image. That is the transposed convolution. You have the link here at the bottom left.

We can show that this operation in fact amounts to performing a classic convolution with another set of parameters, of stride and padding: a transposed convolution such as the one we have on the left is equivalent to a classic convolution, provided that we choose the parameters well. So we can find a double interpretation of the parameters, because the stride or the padding of a transposed convolution can be translated into the stride or padding of an equivalent classic convolution producing the same result. This leaves some parameters to try to understand.
So, if we apply this philosophy of going back to a classic convolution, this is somewhat the direction in which these parameters are used. The stride has a somewhat particular use here: it consists of dilating the starting image. It is much easier to see with examples, so let's look at the example here. It works like a classic convolution: we have our input image, in orange, at the bottom, our output image, in blue, above, and our kernel, 3 by 3 in our example, in gray, which moves over the image. But this time we have applied a stride, in transposed-convolution mode, of 2, which means that instead of the input pixels being contiguous, they are detached from one another: we dilate our starting image. With a stride of (2,2), the input pixels are spaced out and the gaps between them are filled with zeros. And we do not apply any padding here. In the philosophy of translating the transposed convolution into a classic convolution, do not try to understand in detail how the padding works, it gets complicated. Just consider that the default mode, the one we have here where no padding is specified, actually corresponds to having a kind of implicit padding, since the image is enlarged so that our kernel can still grab pieces of it at the edges. So in our example here, we are in this default mode, with a stride of (2,2), and we traverse our input image and generate the output image.

We see immediately that we had an input image of 2 pixels by 2 pixels, and we have an output image which this time is 5 pixels by 5 pixels.
Another example: here, still without padding, but with a stride of (1,1). This time the input pixels remain completely contiguous; it is the same thing as before, except that the image has not been dilated. We traverse it the same way, moving our window with a step of 1, and we again produce an image that is larger than the input: we obtain an output image of 4 pixels by 4 pixels. By the way, do not hesitate to go and see the document by Vincent Dumoulin and Francesco Visin, which takes stock of convolutions. It is a good document, a publication of about thirty pages, but very well done, with very clear diagrams, precisely on this correspondence, this gateway between transposed convolutions and classic convolutions; the illustrations in particular are very well done.
Then, a third example: this time we use a padding of 'same'. That is always the nomenclature of the Keras documentation. The 'same' padding has the vocation of ensuring an output image of predictable size. With a stride of (1,1) and a padding of 'same', without much surprise, we find an output image that is the size of the starting image; we are in the same situation as with a classic convolution. So, to summarize, the important parameter in transposed-convolution mode is really the stride, which allows us to dilate our image. It is not to be confused with the dilation parameter, dilation_rate, which concerns the dilation of the kernel itself, of the projection of the kernel on the image.

That is still something else, and we should not go into those details here. Let's just focus on these two parameters, the stride and the padding, and we will see that with these two simple parameters we will manage to build our decoder without any difficulty. So, we have now seen our encoder part and our decoder part.
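The size arithmetic of the transposed convolution can be sketched the same way as for the classic convolution; this small helper, again not from the course notebooks, reproduces the examples we just saw, using the standard formulas that match Keras' Conv2DTranspose behaviour:

```python
def conv_transpose_output_size(n, kernel, stride, padding):
    """One spatial dimension of a Conv2DTranspose output, Keras conventions:
    'same'  -> n * stride
    'valid' -> (n - 1) * stride + kernel"""
    if padding == "same":
        return n * stride
    return (n - 1) * stride + kernel

# The two slide examples: a 2x2 input, a 3x3 kernel, no padding specified
print(conv_transpose_output_size(2, 3, 2, "valid"))  # 5 : stride 2 dilates, 2x2 -> 5x5
print(conv_transpose_output_size(2, 3, 1, "valid"))  # 4 : stride 1, 2x2 -> 4x4
# The case we will use in our decoder: 'same' padding, stride 2
print(conv_transpose_output_size(7, 3, 2, "same"))   # 14 : 7x7 -> 14x14
```

With 'same' padding and a stride of 2, each transposed convolution exactly doubles the spatial size, which is precisely what the decoder needs.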
We can quickly look at what this looks like in terms of implementation. This is what we will have in our first notebook: on the left, the summary of our encoder; on the right, the summary of our decoder. We see that we have an input layer of 28 by 28 pixels, the size of our classic images. We do a first convolution, a first Conv2D. There are also Conv1D and Conv3D layers; Conv1D layers are very interesting because they allow you to do exactly the same thing on curves, for example to denoise time series, to do classification on them, and much more. Here we use Conv2D, since we are in two dimensions. So we do a first 2D convolution to halve the size of our images, just by playing on the stride and the padding: we recover images of 14 pixels by 14 pixels, with 32 convolution planes. We do a second convolution layer, the same way, and we recover even smaller images, 7 by 7, with 64 planes. Then we flatten everything: we obtain a large vector of 3136 components. We add a dense layer of 16 neurons, then a second dense layer of 10 neurons. And at the output of this model (it is indeed a model), we recover a vector of ten components, which is the size of our latent space. So we passed in an image of 784 pixels, 28 by 28, and we arrive, after this compression, which is lossy of course, at a summary of our input image in a vector of ten components.
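Such an encoder can be sketched as follows. This is a minimal sketch consistent with the summary just described (28x28 input, two stride-2 convolutions with 32 then 64 planes, a flatten to 3136, then dense layers down to 10 components); the 3x3 kernel size and the activations are assumptions, not details taken from the notebook:

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 10  # size of the latent space

inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)  # -> 14x14x32
x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)       # -> 7x7x64
x = layers.Flatten()(x)                                                          # -> 3136
x = layers.Dense(16, activation='relu')(x)
z = layers.Dense(latent_dim)(x)                                                  # latent vector

encoder = keras.Model(inputs, z, name='encoder')
encoder.summary()
```

Calling encoder.summary() prints exactly the kind of table shown on the slide.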
And then, in the other direction, we do the opposite operation: we re-enlarge, we upsample, to be precise.

So we start from our input tensor, which this time is what comes out of the latent space, ten components. A dense layer transforms that, and a second dense layer reconstitutes a vector of 3136 components. We reshape that into 64 planes of 7 by 7. Then we apply transposed convolutions: from 7 by 7 we go to 14 by 14, and from 14 by 14 we come back to 28 by 28 with 32 planes, and then we refine with a last convolution to arrive at 28 by 28 by 1. So now we have seen the concepts and the main principles of our architecture.
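The decoder side can be sketched symmetrically. Again this is a hedged sketch matching the shapes just described (10 components, dense to 3136, reshape to 7x7x64, two stride-2 transposed convolutions back to 28x28, and a final convolution to one plane); kernel sizes and activations are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(10,))                          # the latent vector
x = layers.Dense(7 * 7 * 64, activation='relu')(inputs)    # 3136 components
x = layers.Reshape((7, 7, 64))(x)                          # back to 64 planes of 7x7
x = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(x)  # -> 14x14
x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)  # -> 28x28
outputs = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)              # -> 28x28x1

decoder = keras.Model(inputs, outputs, name='decoder')
```

Note that only the stride and the padding are needed to make each transposed convolution double the image size, as announced.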
Having seen the architecture, the essential part is now how to implement such an autoencoder. An autoencoder is still something relatively simple that we could very well build with the very classic Sequential API without great difficulty. But we would quickly be limited as soon as we wanted to try to do things a little more sophisticated. So it is a very good opportunity to look precisely at the slightly more advanced aspects of programming with Keras, and in particular what is called the functional mode, the functional API of Keras. To start, some documentation, and I take this opportunity to give some links.
First, a very, very important thing concerning Keras: Keras is an integral part of TensorFlow. When you install TensorFlow, you recover Keras fully; there is no need to install anything else. TensorFlow installs Keras, and you have the whole high-level API directly inside it, with nothing to add. So, three links. The first is the Keras site, keras.io, where you will find a whole lot of documentation, the API reference, examples, and a whole lot of information. You can also find information on the TensorFlow site, tensorflow.org (sorry for stepping out of frame), where you will likewise find guides, examples, and all the API documentation, with the documentation of each of the layers and each of the functionalities of Keras inside TensorFlow. And finally, a small advertisement for a book, the latest book by François Chollet, Deep Learning with Python, which is certainly one of the best works today around everything that is deep learning, in any case from introductory to advanced level, and with Keras of course, since, once again, François Chollet is the author of Keras. So, I will move to the other side so as not to hide all that.
Here is an example of an implementation with the Sequential API. It is in fact the first example we had done, in the first notebooks we had seen, a classification on the MNIST dataset, and we had implemented it with what we call the sequential API. We create a sequential model and then we stack our layers: we add a first layer, a second layer, a third layer, and so on. The first layer we add is the input layer in our example here, and the last layer we add is a dense layer of 10 neurons with a softmax activation, which gives the result of our classifier. And that is very good: it is extremely simple, extremely concise, we cannot do simpler or nicer. Simply, there is a limit. For example, how do we do it when we want to have several inputs, or several outputs, or models that are a little more complex? We cannot really do that with this very practical API, which is made for simple things. Hence the functional API.
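The sequential version just described can be sketched as follows; the hidden layer size of 100 neurons is illustrative, not taken from the notebook:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),          # the input layer
    layers.Flatten(),                        # put the pixels flat
    layers.Dense(100, activation='relu'),    # hidden layer (size illustrative)
    layers.Dense(10, activation='softmax'),  # 10-class output of our classifier
])
```

The layers are simply stacked in order; this is what we cannot extend to several inputs or several outputs.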
With the functional API, it will give something like this. It is not more complicated; here is the same simple example again. The implementation with the functional API is just as simple once we understand how it works, which is actually very straightforward: each layer (and, in fact, each model too) behaves like a function, a function to which we give a tensor as parameter and which gives back a tensor as output. So the simplest is to look at it directly. Here we see that we have an input tensor, created with the geometry of our data, 28 by 28, as before. Then we give this input to our first layer: we start by flattening everything, so we give this input tensor to our Flatten layer, which has no particular parameter, it just takes the input. We see here that our input tensor is taken as input by the Flatten layer, and out of this layer comes a tensor, which we call x. Here, for simplicity, we do not invent a different name at each step. Then, for the next layer, a dense layer with a ReLU activation, we pass as input our x, which was the output of the layer before, quite simply, and we recover again an output that we again call x, which we reuse as input of the following layer, and so on, until the last layer.

At the output of our model, we recover a tensor which we call output, and which is the output tensor of our model.

So here is our model: it has an input, it has an output, and we pass tensors like that, from input to output, through our different layers. Then we create the model itself. That is what we have on the last line, when we instantiate keras.Model: we give it the input tensor, which was our input, and the output tensor, which is what we obtain at the end. And there is no witchcraft here: as we progressively built up the different pieces, a graph was built behind the scenes, and the links of this graph were instantiated in the background. When we instantiate the model like this, we create the graph itself, assembling all the pieces that are included between the input and the output. So this syntax is relatively simple.
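The functional version of the same classifier can be sketched like this; as before, the 100-neuron hidden layer is an illustrative size:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))            # the input tensor
x = layers.Flatten()(inputs)                        # each layer is called like a function:
x = layers.Dense(100, activation='relu')(x)         # tensor in, tensor out
outputs = layers.Dense(10, activation='softmax')(x) # the output tensor

model = keras.Model(inputs=inputs, outputs=outputs) # Keras reassembles the whole graph
```

Same model, same conciseness as the sequential version, but the tensors are now explicit, which is what opens the door to multiple inputs and outputs.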
There are two things to remember to find your way around. The first is that each layer takes a tensor as input and gives back a tensor as output (each layer is in fact a callable in Python, to be precise, but never mind: in any case, each layer is a function with a tensor in and a tensor out). The second is that when we manufacture our model, we give it the input tensor of the model and the output tensor of the model, and Keras does all the rest to reconstruct, behind the scenes, the whole graph of our model. So once we have understood how it is made, the syntax is relatively simple, relatively clear, and just as concise as the one we had with the sequential model. From that, we will be able to do very interesting things. For example, imagine that we have a composite dataset, composed of images, texts and metadata.
The text, for example, could be a description, the images could be images of posts on a social network, and the metadata could be various information, for example geographical information on the origin of the post. If we want to process composite data like that, obviously for the images we will need a convolutional network, for the text we will rather use a recurrent network, and for the metadata we will want to use a fully connected network. So how do we manage when we want to make a classification on composite data like that? We will not be able to make a single descriptor compatible at the same time with a recurrent layer, a convolutional layer and a fully connected layer. We will have to treat the three types of data of each of our observations separately, and that is what we are going to do with what is called a multi-input model, a model with several inputs. The principle is the following: we will in fact have three modules. The modules can be models or successions of layers, it is exactly the same. We have three modules which will do three treatments, and at the end of these three treatments we will bring together the results of our three modules and then carry on with the rest of our processing behind. So, as we have seen previously, we will have a succession of layers which takes the first input, the text, then another input layer, which will be the image, and then our third input, the metadata; and on each of these three branches, a succession of layers, the output of one making the input of the next, and so on.

At the output of these three branches, we will have three output tensors, let's call them x_cnn, x_rnn and x_dense.

There is a geometry constraint on these three output tensors: they have to be able to be concatenated, and therefore their geometries must be compatible (all dimensions except the one along which we concatenate must match). For this concatenation we are going to use a particular layer, the Concatenate layer, which takes as parameter a list of tensors and which allows us to concatenate them all and bring out a single tensor, one that can then be used, for example, in dense layers. So it is very, very simple: we act as if we had three classic models, we recover the three outputs, and we concatenate them so as to recover a single tensor that we use afterwards. That is the multiple input.
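A multi-input model of this kind can be sketched as follows. It is a minimal, illustrative sketch: the branch sizes, the text geometry (20 steps of 8 features) and the 5 metadata fields are assumptions chosen just to make the three branches concrete:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Image branch: a small convolutional module
img_in = keras.Input(shape=(28, 28, 1), name='image')
x_cnn = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(img_in)
x_cnn = layers.Flatten()(x_cnn)
x_cnn = layers.Dense(32, activation='relu')(x_cnn)

# Text branch: a small recurrent module (20 steps of 8 features, illustrative)
txt_in = keras.Input(shape=(20, 8), name='text')
x_rnn = layers.SimpleRNN(32)(txt_in)

# Metadata branch: a small fully connected module
meta_in = keras.Input(shape=(5,), name='metadata')
x_dense = layers.Dense(32, activation='relu')(meta_in)

# Concatenate the three output tensors into a single one, then classify
x = layers.Concatenate()([x_cnn, x_rnn, x_dense])
outputs = layers.Dense(10, activation='softmax')(x)

model = keras.Model(inputs=[img_in, txt_in, meta_in], outputs=outputs)
```

keras.Model now receives a list of input tensors, and Concatenate receives the list of the three branch outputs.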
We can also have multiple outputs. This is the example that we will have later in our fourth notebook, one of our examples. We have input images, and we want to do denoising on one side, but we also want to have a classification. How to do both? We could very well train two separate models, but here is how to have a single model that is able to do both: denoising with an autoencoder, and classification with a CNN. We do it in a very simple way, using exactly this mechanism. It is a small example, and we will go over it in more detail in the notebook, but it is just to see that it is very simple. We have an input, which is an input image. We are going to have two models: one model for denoising and one model for classification. So we have our autoencoder model, which takes the image as input and which brings out a tensor, the denoised image. And then we have our second model, the classifier, to which we give the same input, the same input image. It brings out another tensor on its side, which, for our example with ten classes, is the output of a final softmax giving us the possible classification. And then we create our final model with keras.Model, always the same syntax: it takes the inputs and the outputs; as input it can take a tensor or a list of tensors, and as output it can likewise take a tensor or a list of tensors.

In our case here, the model has only one input, so it only takes one tensor as input; for the outputs, on the other hand,

it takes the list of outputs. The syntax is extremely concise. We will see that obviously you then have to define the autoencoder model and the classifier model properly, which is a little more work, but syntactically it is nevertheless as simple as that. Among the little things to remember, the first is that we can absolutely use a model as a layer: we can play Lego, we can completely compose our complex model with sub-models, which are themselves made up of layers or of sub-sub-models, as we please. This allows us to compose and build our models entirely, in a completely modular way. We will see that through our different examples as we move forward. Then we can do even more complicated things.
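This two-output pattern can be sketched as follows. The two sub-models here are deliberately tiny placeholders, not the notebook's real architectures; the point is only the mechanism of using models as layers and passing a list of outputs:

```python
from tensorflow import keras
from tensorflow.keras import layers

def make_autoencoder():
    inp = keras.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inp)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)
    out = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)
    return keras.Model(inp, out, name='ae')

def make_classifier():
    inp = keras.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inp)
    x = layers.Flatten()(x)
    out = layers.Dense(10, activation='softmax')(x)
    return keras.Model(inp, out, name='classifier')

inputs = keras.Input(shape=(28, 28, 1))
ae_output = make_autoencoder()(inputs)    # a model used exactly like a layer
class_output = make_classifier()(inputs)  # second branch on the same input
model = keras.Model(inputs, [ae_output, class_output])
```

One input tensor, a list of two output tensors: that is the whole multi-output mechanism.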
So it will be ours notebook got five. Your second example. Third example, sorry, or there, we are going to have a double output. So we have input data and we are going to have a double output is simply to classify. We will do it in a slightly more advanced way. So that is a technique called lindsay paeschen, that is to say that when we want to do something, we will try to do it in a complementary way, in parallel. That is to say that we will use for our whistler case, two modules. As for the moselle patterns, when volutes, yves, as we want, or two sets of layers, when you motivate, as we want, so our cnn module, a donation, becomes you here a bubble. From here each of these two modules goes. Then it was confused- networks, classic tiff, but which will have modes of operation a little bit different. Shots of men is very different from, and notebooks, different convolutions etc. Then. So they will be sensitive to different aspects of the images. What? So? We can assume that module 1 will be more sensitive, if I caricature, which will be more capable for her, she will have more abilities to recognize the heroes, and one by one example. And then module 2, it may be better on other types of other types of numbers. So here we are going to use our two, 2, two modules like that, when you tiff him in parallel, the same image will be level 2. We're going to retrieve the results of both and then we're going to have to see, see a classic hours behind. Who's going to retrieve all that information there and then who's going to try to see which light in the worst case, and it's for to give a planning overall. That's that. That's so that these models two inception types are. Then there are models like a complete historical inception model that is available in queyras completely. But if this architectural anthem re, then it works for models like on the grounds that we can put in parallel, like that. But it's true for all. 
We could perfectly well have taken approaches like that in the previous examples we have seen: a little bit parallel, complementary rather than competing branches, to get a result.
And it works with networks of any kind, convolutional or recurrent, it does not matter; there is no limit on the model. So here is how to implement that. Again, we could not implement it with the Sequential API, so we rely on the functional API, and at this level the code is very simple. Here we have an input, which is fed to our two modules, m1 and m2, and then we concatenate their results into something we will call x. Here it is: a concatenation of x1 and x2. `layers.concatenate` really concatenates the output tensors of our two modules face to face, so as to recover a single tensor.
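As a sketch of this parallel, Inception-style composition, assuming TensorFlow's Keras; the two branches differ only by their kernel sizes here, and every size is illustrative:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))

# Module 1: small 3x3 convolutions
x1 = layers.Conv2D(8, 3, activation="relu")(inputs)
x1 = layers.MaxPooling2D(2)(x1)
x1 = layers.Flatten()(x1)

# Module 2: larger 5x5 convolutions, sensitive to different image features
x2 = layers.Conv2D(8, 5, activation="relu")(inputs)
x2 = layers.MaxPooling2D(2)(x2)
x2 = layers.Flatten()(x2)

# Concatenate the two branch outputs into a single tensor
x = layers.concatenate([x1, x2])

# Classic dense part that gathers the information from both branches
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = keras.Model(inputs, outputs)
```

The same `inputs` tensor feeds both branches, so one image goes through the two modules in parallel before the dense classifier merges their views.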
That is what we then feed to the rest of our model, the dense layers. It is extremely powerful, and the syntax is, in the end, extremely simple to implement. The fact of being able to use models and layers interchangeably, in a very simple way like that, means that we can have an extraordinarily compact writing of our architectures. We will see that the architectures in our examples are already starting to be a little bit complex, and that the code remains very readable and, in addition, extremely compact. So here we are on something quite simple and very clear.
Now a few small specifics, a little zoom on the compilation part and then on the learning part. We still use model.compile, and compiling the model and the learning are triggered as usual. The only specificity is that, when we have a model with two outputs, for example, we will obviously have to specify the loss functions accordingly. This is what we have in our example here: if we have two outputs, we specify a loss function for each of our outputs. So, instead of taking a single loss function, the model takes a list, the list of the loss function names. And then we have a second parameter, loss_weights, which allows us to weight each of the loss functions. What does it mean here? Understand: 0.25 times the categorical crossentropy loss, plus 1 times the MSE; that gives the global loss function which will be used for our learning, for the fit. It is the same if we have multiple inputs or multiple outputs: we will have to specify this information. So, where classically we made a fit by giving x and y, instead of giving x and y we will give the list of the x and the list of the y when we have several inputs or several outputs, as in the example.
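This two-loss compilation can be sketched as follows, assuming TensorFlow's Keras; the toy model, its output names and the exact weights are illustrative, only the `loss` list and `loss_weights` mechanism is the point:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A toy model with two outputs: a classification and a reconstruction
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Flatten()(inputs)
x = layers.Dense(64, activation="relu")(x)
class_out = layers.Dense(10, activation="softmax", name="classifier")(x)
recon_out = layers.Dense(28 * 28, activation="sigmoid", name="reconstruction")(x)
model = keras.Model(inputs, [class_out, recon_out])

# One loss per output, plus a weight for each loss:
# global loss = 0.25 * categorical_crossentropy + 1.0 * mse
model.compile(
    optimizer="adam",
    loss=["categorical_crossentropy", "mse"],
    loss_weights=[0.25, 1.0],
)
```

The lists are matched to the outputs by position: the first loss and weight apply to `class_out`, the second pair to `recon_out`.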
Here, for example, we have two outputs, so in the fit we specify a list: a table of inputs on one side, and then, in the same way, a kind of table of outputs for the targets, with the other parameters classically the same as we had before.
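A minimal sketch of such a fit, assuming TensorFlow's Keras; the model, the random data and the shapes are all illustrative stand-ins for the notebook's real training set:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny two-output model (shapes are illustrative)
inputs = keras.Input(shape=(8,))
out1 = layers.Dense(3, activation="softmax")(inputs)
out2 = layers.Dense(8, activation="sigmoid")(inputs)
model = keras.Model(inputs, [out1, out2])
model.compile(optimizer="adam",
              loss=["categorical_crossentropy", "mse"],
              loss_weights=[0.25, 1.0])

# Fake data just to show the calling convention
x = np.random.rand(16, 8).astype("float32")
y1 = keras.utils.to_categorical(np.random.randint(0, 3, 16), 3)
y2 = x  # e.g. reconstruct the input

# With several outputs, fit() takes a list of targets, one per output
history = model.fit(x, [y1, y2], epochs=1, verbose=0)
```

With several inputs, `x` would likewise become a list; dictionaries keyed by the output names work too, as mentioned just after.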
That is one of the possible syntaxes. Instead of using two lists like that, we can also use a dictionary, if we have named the different parts, the different outputs; it may be necessary to refer to the documentation for that subtlety. But globally, if you have understood that, you have understood 90% of the important things of the functional part, and especially how to use the functional API. It is really on these three slides: you have practically all the things allowing you to start using it. We will see through our examples that this is really how it works. Thank you very much for watching this video.
The first example, which we will see in the next video, is a denoiser: we will build our autoencoder and then effectively succeed in taking an extremely noisy image and recovering a denoised image. It is not cheating; it really works like that. Thank you very much and see you in our next video.