Computer science is not a science, not about computers, and has commonalities with magic and geometry; it is about formalizing intuitions about processes and developing a way to talk precisely about how-to knowledge.
[Applause].
I'd like to welcome you to this course
on computer science. Actually, that's a
terrible way to start. Computer science
is a terrible name for this business.
First of all, it's not a science. It
might be engineering or it might be art.
We'll actually see that computer
so-called science actually has a lot in
common with magic. You'll see that in
this course. So it's not a science. It's
also not really very much about
computers. It's not about computers in
the same sense that physics is
not really about particle accelerators,
and biology is not really about
microscopes and petri dishes, and it's
not about computers in the same
sense that geometry is not really about
using surveying instruments. In fact,
there's a lot of commonality between
computer science and geometry. Geometry,
first of all, is another subject with a
lousy name. The name comes from Gaia,
meaning the earth, and Metron, meaning to
measure. Geometry originally meant
measuring the earth, or surveying,
and the reason for that was that
thousands of years ago, the Egyptian
priesthood
developed the rudiments of geometry in
order to figure out how to restore the
boundaries of fields that were destroyed
in the annual flooding of the Nile. To
the Egyptians who did that, geometry
really was the use of surveying
instruments. Now, the reason that we think
computer science is about computers is
pretty much the same reason that the
Egyptians thought geometry was about
surveying instruments. And that is, when
some field is just getting started and
you don't really understand it very well,
it's very easy to confuse the essence of
what you're doing with the tools that
you use. And indeed, on some absolute
scale of things, we probably know less
about the essence of computer science
than the ancient Egyptians really knew
about geometry.
Well, what do I mean by the essence
of computer science? What do I mean by the
essence of geometry? See, it's certainly
true that these Egyptians went off and
used surveying instruments, but when we
look back on them after a couple of
thousand years, we say, gee, what they were
doing, the important stuff they were
doing was to begin to formalize notions
about space and time, to start a way of
talking about mathematical truths
formally. That led to the axiomatic
method, which led to sort of all of modern
mathematics: figuring out a way to talk
precisely about so-called declarative
knowledge, what is true. Well, similarly, I
think in the future people will look
back and say, yes, those primitives of
the 20th century were fiddling around
with these gadgets called computers. But
really what they were doing was starting
to learn how to formalize intuitions
about process, how to do things, starting
to develop a way to talk precisely about
how-to knowledge, as opposed to geometry,
which talks about what is true.
Let me give you an example of that.
Take a look. Here is a piece
of mathematics that says what a
square root is: the square root of
x is the number y such that y
squared is equal to x and y is greater
than or equal to zero. Now that's a fine piece of
mathematics. But just telling you what a
square root is doesn't really say
anything about how you might go
out and find one, right? So let's contrast
that with a piece of imperative
knowledge, right, how you might go out and
find a square root. This, in fact, also
comes from Egypt, not ancient,
ancient Egypt. This is an algorithm
due to Heron of Alexandria, called
how to find a square root by successive
averaging. And what it says is that, in
order to find a square root, you make a
guess, and you improve that guess, and the
way you improve the guess is to average
the guess and x over the guess, and we'll
talk a little bit later about why that's
a reasonable thing, and you keep
improving the guess until it's good
enough. But that's a method. That's how to
do something, as opposed to declarative
knowledge, which says what you're looking
for, right? That's a process.
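As an illustrative sketch of that successive-averaging method (the course itself uses Lisp; this is a hedged Python rendering, with the function name and tolerance chosen for illustration, not taken from the lecture):

```python
def heron_sqrt(x, tolerance=1e-10):
    """Find a square root by successive averaging (Heron of Alexandria).

    Make a guess, then improve it by averaging the guess with
    x over the guess, and keep improving until it's good enough.
    """
    guess = 1.0
    while abs(guess * guess - x) > tolerance:  # good enough yet?
        guess = (guess + x / guess) / 2        # improve: average guess and x/guess
    return guess
```

For example, `heron_sqrt(36)` converges to 6 after a handful of improvements.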
Well, what's the process? In general, it's
kind of hard to say. You can think of it
as like a magical spirit, that sort of
lives in the computer and does something,
and the thing that directs the process
is a pattern of rules called a procedure.
So procedures are the spells, if
you like, that control these magical
spirits that are the processes. And well,
I guess you know, everyone needs a
magical language, and sorcerers, real
sorcerers, used ancient Akkadian or
Sumerian or Babylonian or whatever. We're
gonna conjure up spirits in a magical
language called Lisp,
which is a language designed for talking
about, for casting, the spells that are
procedures to direct the processes. Now,
it's very easy to learn Lisp. In fact in
a few minutes I'm going to teach you
essentially all of Lisp and teach you
essentially all of the rules, and you
shouldn't find that particularly
surprising. That's sort of like saying
it's very easy to learn the rules of
chess and indeed in a few minutes you
can tell somebody the rules of chess. But
of course that's very different from
saying you understand the implications
of those rules and how to use those
rules to become a masterful chess player.
Well, Lisp is the same way. We're
going to state the rules in a few
minutes, and it'll be very easy to see.
But what's really hard is going to be
the implications of those rules, right,
how you exploit those rules to be a
master programmer. And the implications
of those rules are going to take us the
well, the whole rest of the subject and
of course, way beyond. Okay, so in computer
science we're in the business of
formalizing this sort of how-to
imperative knowledge, how to do
stuff. And the real issues of computer
science are of course not, you know,
telling people how to compute
square roots. Because if that was all it was,
there wouldn't be any big deal. The real
problems come when we try to build very,
very large systems,
computer programs that are
thousands of pages long, so long that
nobody can really hold them in their
heads all at once. And the only reason
that that's possible is because there
are techniques, techniques for
controlling the complexity of these
large systems, and these techniques for
controlling complexity are what
this course is really about. And in some
sense that's really what computer
science is about. Now that may seem like
a very strange thing to say because they-
after all, a lot of people besides
computer scientists- deal with
controlling complexity. Are they? A large
airliner is an extremely complex system,
and the aeronautical engineers who
design that, or you know, are dealing with
immense complexity. But there's a
difference between that kind of
complexity and what we deal with in
computer science, and that is that that
computer science and sometimes isn't
real. You see, when an engineer is
designing a physical system that's made
out of real parts, the engineer has to
worry about, has to address, problems
of tolerance and approximation and noise
in the system. So, for example, as an
electrical engineer I can go off and
easily build a one stage amplifier or a
two stage amplifier and I can imagine
cascading a lot of them to build a
million stage amplifier. But it's
ridiculous to build such a thing, because
long before the millionth stage, the
thermal noise in those components way at
the beginning is going to get amplified
and make the whole thing meaningless.
Computer science deals with idealized
components. We know as much as we want
about these little program and data
pieces that we're fitting
together. So we don't have
to worry about tolerance. And that means
that in building a large program
there's not all that much difference
between what I can build and what I can
imagine, because the parts are these
abstract entities that I know as much
about as I want; I know about them as
precisely as I like. So, as opposed to
other kinds of engineering where the
constraints on which you can build are
the constraints of physical systems, the
constraints of physics and noise and
approximation, the constraints imposed in
building large software systems are the
limitations of our own minds. So in that
sense computer science is like an
abstract form of engineering. It's the
kind of engineering where you ignore the
constraints that are imposed by reality.
Ok, well, what are some of these
techniques? They're not special to
computer science. The first technique, which
is used in all of engineering, is a kind
of abstraction called blackbox
abstraction: take something and build
a box about it. Let's see, for example,
look at that square root method. I might
want to take that and build a box that
says to find the square root of x, and
that might be a whole complicated set of
rules and that might end up being a kind
of thing where I can put in, say, 36, and
say what's the square root of 36, and out
comes 6. And the important thing is that
I'd like to design that so that if
George comes along and would like to
compute the square root of a plus the
square root of b, he can take this thing
and use it as a module, without
having to look inside, and build
something that looks like this: an a
and a b, a square root box and another
square root box, and then something that
adds, that would put out the answer. And you
can see, just from the fact that I want
to do that, that from George's point of
view the internals of what's in here
should not be important. So, for instance, it
shouldn't matter that when I wrote this
I said I want to find the square root of
x. I could have said the square root of y
or the square root of a or anything
at all. But that's the fundamental notion
of putting something in a box using
blackbox abstraction to suppress detail.
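George's situation can be sketched in code: one box built by me, used as a sealed module by George (a hedged Python sketch; the names `my_sqrt` and `sum_of_roots` are mine for illustration, not from the lecture):

```python
def my_sqrt(x):
    """My square root box. What's inside (successive averaging)
    is a detail George never needs to see."""
    guess = 1.0
    while abs(guess * guess - x) > 1e-10:
        guess = (guess + x / guess) / 2
    return guess

def sum_of_roots(a, b):
    """George's program: square root of a plus square root of b,
    using my box as a module without looking inside it."""
    return my_sqrt(a) + my_sqrt(b)
```

Whether the internals were written in terms of x, y, or anything else makes no difference to George.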
And the reason for that is: you want to
go off and build your bigger boxes. Now
there's another reason for doing
blackbox abstraction, other than you want
to suppress detail, for building bigger
boxes. Sometimes
you want to say that your way of doing
something, your how to method, is an
instance of a more general thing, and
you'd like your language to be able to
express that generality. Let me show you
another example: sticking with square
roots. Let's go back and take another
look at that slide with the square root
algorithm on it. Remember what that says.
That says: in order to do something, I
make a guess and I improve that guess,
and I sort of keep improving that
guess. So there's a general
strategy of: I'm looking for something,
and the way I find it is that I keep
improving it.
Now that's a particular case of another
kind of strategy, of finding a fixed
point of something. See, a fixed point of
a function f is a value y
such that f of y equals y. And the
way I might do that is, I'll
start with a guess, and if I want
something that doesn't change when I
keep applying f, I'll keep applying f
over and over until the result doesn't
change very much. So there's a
general strategy. And then, for example, to
compute the square root of x, I can try
and find a fixed point of the function
which takes y to the average of y and x over y,
and the idea of that is that if I really
had y equal to the square root of x, then
y and x over y would be the same value;
they'd both be the square root of x,
right?
x over the square root of x is the
square root of x, and so, if y
were equal to the square root of x, then
the average wouldn't change, right? So the
square root of x is a fixed point of
that particular function. Now, what I'd
like to have, I'd like to express, is the
general strategy for finding fixed points.
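That fixed-point box, a general strategy that takes a function as its input, might be sketched like this (a hedged Python illustration; the lecture builds this in Lisp, and the names and tolerance here are my own choices):

```python
def fixed_point(f, guess=1.0, tolerance=1e-10):
    """General strategy: keep applying f over and over until the
    result doesn't change very much; that result has f(y) ~= y."""
    while True:
        next_guess = f(guess)
        if abs(next_guess - guess) < tolerance:
            return next_guess
        guess = next_guess

def sqrt(x):
    """Square root as a particular case of the general strategy:
    the square root of x is a fixed point of y -> average(y, x/y)."""
    return fixed_point(lambda y: (y + x / y) / 2)
```

Here `fixed_point` is the general box, and `sqrt` is the particular method that comes out when we feed it the averaging function.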
So what I might imagine doing is to
be able to use my language to
define a box that says fixed point, just
like I could make a box that says square
root, and I'd like to be able to express
this in my language. All right, so I'd
like to express not only the imperative
how-to knowledge of a particular thing
like square root, but I'd like to be able
to express the imperative knowledge of
how to do a general thing like how to
find fixed points. And in fact, let's go
back and look at that slide again. See,
not only
is this a piece of imperative knowledge,
how to find a fixed point, but over here
on the bottom there's another piece of
imperative knowledge, which is: one way to
compute square roots is to apply this
general fixed point method. So I'd like
to also be able to express that piece of
imperative knowledge. What would that look
like? That would say, this fixed point box
is such that if I input to it the
function that takes Y to the average of
Y and X over Y, then what should come out
of that fixed point box is a method for
finding square roots. So in these boxes
we're building, we're not only building
boxes that take in numbers and output
numbers; we can be building boxes that,
in effect, compute methods like finding
square roots. And my fixed
point box takes as its inputs
functions, like y goes to
the average of y and x over y. Why would you
want to do that? See, this fixed point box
will end up being a
procedure, as we'll see, whose value is
another procedure. The reason we want to
do that is because procedures are going
to be our ways of talking about
imperative knowledge, and the way to make
that very powerful is to be able to talk
about other kinds of knowledge. So here
is a procedure that, in effect, talks about
another procedure, a general
strategy that itself talks about general
strategies. Okay, well, our first topic in
this course- there'll be three major
topics- will be blackbox
abstraction. Let's look at that in a
little bit more detail. What we're going
to do is,
we'll start out talking about how Lisp
is built up out of primitive objects, like
what does the language supply to us, and
we'll see that there are primitive
procedures and primitive data. Then we're
going to see how you take those
primitives and combine them to make more
complicated things, by means of
combination. And what we'll see is that
there are ways of putting things together,
putting primitive procedures together to
make more complicated procedures, and
we'll see how to put primitive data
together to make compound data. Then
we'll say, well, having made those
compound things, how do you abstract them?
How do you put those black boxes around
them so you can use them as components
in more complex things? And we'll see
that's done by defining procedures and a
technique for dealing with compound data
called data abstraction. And then, what's
maybe the most important thing, is going
from just the rules to how an expert
works. How do you express common patterns
of doing things like saying: well, there's
a general method of fixed point and
square root is a particular case of that,
and we're going to use- I've already
hinted at it- something called
higher-order procedures, namely
procedures whose inputs and outputs are
themselves procedures. And then we'll
also see something very interesting.
We'll see, as we go further and further
on and become more abstract, that
the line between what we
consider to be data and what we consider
to be procedures is going to blur at an
incredible rate. Well, that's our
first subject: blackbox abstraction. Let's
look at the second topic. Let me introduce
it like this: suppose I
want to express the idea, and remember,
we're talking about ideas, that I can take
something
and multiply it by the sum of two other
things. So, for example, I might say: if I
add 1 and 3 and multiply that by 2, I
get 8. But I'm talking about the general idea of what's called linear combination: that you can add two things and multiply them by something else. Very easy when I think about it for numbers, but suppose I also want to use that same idea to think about how I could add two vectors, a1 and a2, and then scale them by some factor x and get another vector. Or I might say I want to think about a1 and a2 as being polynomials, and I might want to add those two polynomials and then multiply them by 2 to get a more complicated one. Or a1 and a2 might be electrical signals, and I might want to think of summing those two electrical signals and then putting the whole thing through an amplifier, multiplying it by some factor of 2 or something. The idea is, I want to think about the general notion of that. Now, if our language is going to be a good language for expressing those kinds of general ideas, it really ought to be able to do
that. So I'd like to be able to say I'm
going to multiply by X the sum of a 1
and a 2 and I'd like that to express the
general idea of all different kinds of
things that a 1 and a 2 could be. Now, if
you think about that, there's a problem
because after all, the actual primitive
operations that go on in the machine are obviously going to be different if I'm adding two numbers than if I'm adding two polynomials, or if I'm adding the representation of two electrical signals or waveforms. Somewhere there has to be the knowledge of the various kinds of things that you can add and the ways of adding them, if I'm to construct such a system. The question is: where do I put that knowledge? How do I
think about the different kinds of
choices I have? And if tomorrow, George
comes up with a new kind of object that
might be added and multiplied, how do I
add George's new object to the system
without screwing up everything that was
already here? Well, that's going to be the second big topic: the way of controlling that kind of complexity. And
the way you do that is by establishing
conventional interfaces, agreed-upon ways
of plugging things together. Just like in
electrical engineering, people have
standard impedances for connectors and
then you know, if you build something
with one of those standard impedances,
you can plug it together with something else like that. That's going to be our second large topic: conventional interfaces. First, we're going to talk about the problem of generic operations, which is the one I alluded to: things like plus, that
have to work with all different kinds of
data.
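One hedged way to picture where that knowledge might live- this is only a sketch of the idea, not the system the course builds, and the table and names are my own- is a table of add operations keyed by the type of thing, so that George can install his new kind of object without touching the existing entries:

```scheme
;; Illustrative only: a table mapping a type tag to its add operation.
(define *add-table* '())

(define (put-add type proc)
  (set! *add-table* (cons (cons type proc) *add-table*)))

(define (generic-add type a b)
  (let ((entry (assq type *add-table*)))
    (if entry
        ((cdr entry) a b)
        (error "Don't know how to add" type))))

;; Adding numbers and adding vectors are different machine operations,
;; but both live behind the same interface:
(put-add 'number +)
(put-add 'vector (lambda (v w) (map + v w)))

(generic-add 'number 3 4)            ; 7
(generic-add 'vector '(1 2) '(3 4))  ; (4 6)

;; Tomorrow, George installs polynomials with another PUT-ADD,
;; and nothing above needs to change.
```

The conventional interface here is generic-add itself: everything agrees to plug in through it.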
So that's generic operations. Then
we're going to talk about really
large-scale structures. How do you put
together very large programs that model
the kinds of complex systems in the real
world that you'd like to model? And what
we're going to see is that there are two
very important metaphors for putting
together such systems. One is called
object-oriented programming, where you
sort of think of your system as a kind
of society full of little things that
interact by sending information between
them. And then the second one is
operations on aggregates, called streams,
where you think of a large system put
together, kind of like a signal
processing engineer puts together a
large electrical system. That's going to
be our second topic. Now, the third thing we're going to come to- the third basic technique for controlling complexity- is making new languages. Sometimes, when
you're sort of overwhelmed by the
complexity of a design, the way that you
control that complexity is to pick a new
design language, and the purpose of the
new design language will be to highlight
different aspects of the system. It'll
suppress some kinds of details and
emphasize other kinds of detail. This is going to be sort of the most magical part of the course. We're going to start
out by actually looking at the
technology for building new computer
languages. The first thing we're going to do is actually express in
Lisp the process of interpreting Lisp itself, and that's going to be a very self-referential sort of thing. There's a little mystical symbol that has to do with that, which we'll see. The process of interpreting Lisp is sort of a giant wheel of two processes, apply and eval, which sort of constantly reduce expressions to each other. Then we're
gonna see all sorts of other magical
things. Here's another magical symbol. This is sort of the Y operator, which is in some sense the expression of infinity inside our procedural language. We'll take a look at that.
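A toy sketch of that apply-eval wheel- nothing like the real interpreter the course builds, and the names are mine- in which each process calls the other:

```scheme
;; TOY-EVAL reduces an expression to a value; for an application it
;; calls TOY-APPLY, which in turn depends on TOY-EVAL's results.
;; Only numbers and two primitive operators are handled here.
(define (toy-eval exp)
  (cond ((number? exp) exp)
        ((pair? exp)
         (toy-apply (car exp) (map toy-eval (cdr exp))))
        (else (error "Unknown expression" exp))))

(define (toy-apply op args)
  (cond ((eq? op '+) (apply + args))
        ((eq? op '*) (apply * args))
        (else (error "Unknown operator" op))))

(toy-eval '(+ 3 (* 5 6))) ; 33
```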
In any case, this section of the course
is called metalinguistic abstraction: abstracting by talking about how you construct new
languages. As I said, we're going to start
out by looking at the process of
interpretation. We're going to look at this apply-eval loop. Then, just to show you that this is very
general, we're going to use exactly the
same technology to build a very
different kind of language, a so-called logic programming language, where you
don't really talk about procedures at
all that have inputs and outputs. What
you do is talk about relations between
things. And then, finally, we're going to
talk about how you implement these
things very concretely on the very
simplest kind of machines. We'll see something like this, which is a picture of the chip that is the Lisp interpreter we will be talking about then in hardware. Okay, well, there's an outline of the course. Three big topics: black-box abstraction, conventional interfaces, metalinguistic abstraction. Now let's take a break, and then
we'll get started.
[Music]
[Applause]
Well, let's actually start in learning Lisp now. Actually, we'll start out by learning something much more important, maybe the very most important thing in this course, which is not Lisp in particular, of course, but rather a
general framework for thinking about languages that I've already alluded to. When
somebody tells you they're going to show
you a language, what you should say is: all right, what I'd like you to tell me is, what are the primitive elements? What does the language come with?
Then, what are the ways you put those
together? What are the means of
combination? What are the things that
allow you to take these primitive
elements and build bigger things out of
them? What are the ways of putting things
together? And then, what are the means of
abstraction? How do we take those
complicated things and draw those boxes
around them? How do we name them so that
we can now use them as if they were
primitive elements in making still more
complex things, and so on and so on and
so on. So when someone says to you, gee, I
have a great new computer language, you
don't say: how many characters does it
take to invert a matrix? That's irrelevant, right. What you say is: if
the language did not come with matrices
built-in or with something else, then how
could I then build that thing? What are
the means of combination which would
allow me to do that? And then, what are
the means of abstraction which allow me to use those elements in making more complicated things? Well, we're going to see that Lisp has some
primitive data and some primitive
procedures. In fact, let's really start. Here's a piece of primitive data in Lisp: the number 3. Actually, to be very pedantic, that's not the number three; that's some symbol that represents Plato's concept of the number three. And here's some more primitive data in Lisp: 17.4, actually some representation of seventeen point four. And here's another one, 5.
Here's another primitive object that's
built into Lisp: addition. Actually, to be the same kind of pedantic, this is a name for the primitive method of adding things; just like that was a name for Plato's number three, this is a name for Plato's concept of how you add things. So sure, those are some primitive elements. I can put them together. I can say: gee, what's the sum of 3 and 17.4 and 5? The way I do that is to say, let's apply the sum operator to these three numbers, and I should get, what, 25.4. I should be able to ask Lisp what the value of this is, and it will return 25.4.
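Written out as it would be typed at the machine, assuming an MIT-Scheme-style Lisp:

```scheme
;; The primitives named so far, then a combination applying one:
3            ; a representation of Plato's number three
17.4         ; a representation of seventeen point four
+            ; a name for the primitive method of adding things
(+ 3 17.4 5) ; asking Lisp for the value of this returns 25.4
```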
Let's introduce some names. This thing that I typed is called a combination, and a combination consists, in general, of applying an operator- so this is an operator- to some operands; these are the operands. And of course, I can make more complex things. The reason I can get complexity out of this is because the operands themselves, in general, can be combinations. So, for instance, I could say: what is the sum of 3 and the product of 5 and 6 and 8 and 2? And I should get, let's see, 43. So Lisp should tell me that that's 43. Forming combinations is the basic means of combination that we'll be looking at.
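That nested combination, written out:

```scheme
;; The reason complexity is possible: operands can themselves be
;; combinations. One operator, four operands, and the second operand
;; is itself a combination of one operator and two operands.
(+ 3 (* 5 6) 8 2) ; 3 + 30 + 8 + 2 = 43
```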
And then, well, you see some syntax here. Lisp uses what's called prefix notation, which means that the operator is written to the left of the operands. It's just a convention. And notice, it's fully parenthesized, and the parentheses make it completely unambiguous. So by looking at this, I can see that there's the operator, and there are 1, 2, 3, 4 operands, and I can see that the second operand here is itself some combination that has one operator and two operands. Parentheses in Lisp are a little bit- or are very- unlike parentheses in conventional mathematics. In mathematics, we sort of use them to mean grouping, and it sort of doesn't hurt if sometimes you leave out parentheses, if people understand that that's a group; and in general it doesn't hurt if you put in extra parentheses, because that maybe makes the grouping more distinct. Lisp is not like that. In Lisp, you cannot leave out parentheses and you
cannot put in extra parentheses, because putting in parentheses always means exactly and precisely this: a combination, which means apply an operator to operands. And if I left those parentheses out, it would mean something else. In fact, the way to think about what I'm really doing when I write something like this is that I'm writing a tree. So this combination is a tree that has a plus, and then a 3, and then something else, and an 8 and a 2; and that something else is itself a little subtree that has a star and a 5 and a 6. And the way to think of
that is really what's going on: we're writing these trees, and parentheses are just a way to write this two-dimensional structure as a linear character string. Back when Lisp started, people had teletypes or punch cards or whatever, and this was more convenient. Maybe if Lisp started today, the syntax of Lisp would look like that tree. Well, let's look at what that actually looks like on the computer.
Right here I have a Lisp interaction setup. There's an editor, and on the top I'm going to type some values and ask Lisp what they are. So, for instance, I can say to Lisp: what's the value of that symbol, 3? I ask it to evaluate it, and there you see, Lisp has returned on the bottom and said: oh yeah, that's 3. Or I can say: what's the sum of 3 and 4 and 8, what's that combination? And ask Lisp to evaluate it, and that's 15. Well, I can type in something more complicated. I can say: what's the sum of the product of three and the sum of seven and nineteen and a half? And you'll notice here that Lisp has something built in that helps me keep track of all these parentheses. Watch: as I type the next close parenthesis, which is going to close the combination starting with the star, the opening one will flash there. I'll rub
this out and do it again: I type the close parenthesis, and you see that it closed the plus. Again: that closes the star. Now I'm back to the sum, and maybe I'm going to add that all to 4- that closes the plus. Now I have a complete combination, and I can ask Lisp for the value of that. That kind of paren balancing is something that's built into a lot of Lisp systems to help you keep track, because it is kind of hard, just by hand, doing all these parentheses. There's another kind of convention for keeping track of parentheses. Let me write another complicated combination. Let's take the sum of the product of 3 and 5 and add that to something. And now what I'm going to do is indent so that the operands are written vertically: what's the sum of that and the product of 47 with the difference of 20 and 6.8? That means subtract 6.8 from 20. You see the parentheses close: close the minus, close the star. And now, as I add another operand, you see the Lisp editor here is indenting to the right position automatically, to help me keep track. I'll do that again: I close that last parenthesis, and you see it balances the plus. Now I can say: what's the value of that? All right, so those two things-
indenting to the right level, which is
called pretty printing, and flashing
parentheses- are two things that a lot of
systems have have built in to help you
keep track, and you should learn how to
use them, okay, well, those are the
primitives. There's a means of
combination. Now let's go up to the means
of abstraction. Right, I'd like to be able
to take the idea that I do some
combination like this and abstract it
and give it a simple name. So I can use
that as an element. And I do that in Lisp
with define. So I could say, for example:
define a to be the product of five and
five. And now I could say so, say, for
example, two lists: what is the product of
a and a? This should be 25 and there
should be six, twenty five. And then,
crucial thing, I can now use a here. I've
used it in the combination, but I could
use that in other more complicated
things that I need in turn. So I could
say: define B to be the sum of, will say a
and the product of five and a. You close
the plus. Let's take a look at that on the computer and see how that looks. I'll just type what I wrote on the board. I can say: define A to be the product of 5 and 5, and I'll tell that to Lisp. Notice what Lisp responded with: an A at the bottom. In general, when you type in a definition to Lisp, it responds with the name, the symbol being defined. And I could say to Lisp: what is the product of A and A? It says that's 625. I can define B to be the sum of A and the product of 5 and A- close paren closes the star, close closes the plus- then the define. Lisp says, okay, B, there at the bottom, and now I can ask Lisp: what's the value of B? And I can say something more complicated, like: what's the sum of A and the quotient of B and 5? That slash is divide, another
primitive operator. I've divided B by
five and added that to a. It says, okay, that's
55. All right, so there's what it looks
like. There's the basic means of defining
something. It's the simplest kind of
naming, but it's not really very powerful.
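The session above can be sketched as a short Scheme transcript; the exact form of the interpreter's responses varies between Lisp systems, so the comments here are only indicative:

```scheme
;; Name the value of a combination, then build on it.
(define a (* 5 5))        ; the interpreter responds with the name: a

(* a a)                   ; 625

(define b (+ a (* 5 a)))  ; b is 25 + 125, i.e. 150

b                         ; 150
(+ a (/ b 5))             ; 25 + 30, i.e. 55
```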
See, what I'd really like to name (we're
talking about general methods) is, for
example, the fact that I could multiply
five by five, or six by six, or a
thousand and one by a thousand and one,
or a thousand one point seven by a
thousand one point seven. Right,
I'd like to be able to name the general
idea of multiplying something by itself.
We know what that is. That's called
squaring. The way I can do that in Lisp
is, I can say: define, to square
something x, multiply x by itself. And
then, having done that, I could say to
Lisp, for example: what's the square of
10, and it will say a hundred. Yeah,
let's actually look at that a little
more closely. Yeah, right, there's the
definition of square. To square something,
right, multiply it by itself. All right,
you see this X here. Right, that X is kind
of a pronoun, which is the something that
I'm going to square, and then what I do
with it is: I multiply it
by itself. Okay, right, so there's a
notation for defining a procedure.
Actually, this is a little bit confusing,
because it looks sort of like how I might
use square, as in square of x or square of 10,
but it's not making it very clear that
I'm actually naming something. So let me
write this definition in another way
that makes a little bit more clear that
I'm naming something. I'll say: define
square to be lambda of x, times x x.
Here I'm naming something square, but
just like over here I'm naming something
a. The thing I named a was the
value of this combination. Here the thing
that I'm naming square is this thing
that begins with lambda, and lambda is
Lisp's way of saying: make a procedure.
Let's look at that more closely on the
slide. The way I read that definition is
to say: I define Square to be make a
procedure. That's what the lambda is. Make
a procedure with an argument named X, and
what it does is return the result of
multiplying x by itself. Now, in general,
we're going to
be using this top form of define,
because it's a little bit more
convenient, but don't lose sight of the
fact that it's really this: in fact, as
far as the Lisp interpreter is concerned,
there's no difference between typing
this to it and typing this to it, and
there's a word for that. It's sort of
syntactic sugar.
What syntactic sugar means is having
somewhat more convenient surface forms
for typing something. So this is just
really syntactic sugar for this
underlying Greek thing with the lambda.
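In Scheme notation, the two forms of the definition look like this; the second name, square-2, is only an illustrative label so that both versions can coexist side by side:

```scheme
;; The sugared form for defining a procedure...
(define (square x) (* x x))

;; ...is, as far as the interpreter is concerned, the same
;; as naming a lambda, the procedure that gets constructed.
(define square-2 (lambda (x) (* x x)))

(square 10)    ; 100
(square-2 10)  ; 100
```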
And the reason you should remember that
is: don't forget that when I write
something like this I'm really naming
something. I'm naming something square,
and the something that I'm naming square
is a procedure that's getting
constructed. OK, well, let's look at
that on the computer too. So I'll come in.
I'll say: define square of x to be
times x x,
and I'll tell that to Lisp. It says:
you have named something square. And now,
having done that, I can ask Lisp:
what's the square of a thousand and one?
Or, in general, I could say what's the
square of the sum of five and seven?
All right, square of twelve is 144. Or I can
use square itself as an element in some
combination. I can say what's the sum of
the square of 3 and the square of 4?
Hey, 9 and 16 is 25. Or I can use square as an
element in some much more complicated
thing. I can say: what's the
square of the square of the square of
1,001?
There's the square of the square of the
square of 1,001. Or I can say to Lisp:
what is square itself, what's the value
of that? And Lisp returns some
conventional way of telling me that
that's a procedure: it says compound
procedure square. And what the value of
square is, is this procedure, and the thing
with the stars and the brackets are just
Lisp's conventional way of describing
that. Let's look at two more examples of
defining things. Right here there are two
more procedures. I can define the average
of x and y to be the sum of x and y
divided by two. Having average and
square, I can use
that to talk about the mean square of
something which is the average of the
square of X and the square of Y. So, for
example, having done that, I could say:
what's the mean square of 2 and 3? And I
should get the average of 4 and 9, which is
6 and 1/2. The key thing here is that
having defined square, I can use it as if
it were primitive, right? So if we look
here on the slide at mean square,
right, the person defining mean square
doesn't have to know at this point
whether Square was something built into
the language or whether it was a
procedure that was defined. And that's
key thing in Lisp: that you do not make
arbitrary distinctions between things
that happen to be primitive in the
language and things that happen to be
built in later. A person using them
shouldn't even have to know. So the things
you construct get used with all the power
and flexibility as if they were
primitives. In fact, I can drive that home
by
looking on the computer one more time. We
talked about plus and in fact if I come
here on the computer screen and say what
is the value of plus, notice what Lisp
types out on the bottom. It typed:
compound procedure plus. Because in this
system it turns out that the addition
operator is itself a compound procedure,
and if I hadn't just typed that in, you'd
never know that and it wouldn't make any
difference anyway. We don't care, it's
below the level of the abstraction that
we're dealing with. So the key thing is:
you cannot tell, should not be able to
tell, in general, the difference between
things that are built-in and things that
are compound. Why is that? Because the
things that are compound have an
abstraction wrapper wrapped around them.
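As a sketch of that point, mean-square below calls square and average exactly the way it would call a primitive; nothing in its definition records that they are compound. (In Schemes with exact rationals, the result prints as 13/2.)

```scheme
(define (square x) (* x x))
(define (average x y) (/ (+ x y) 2))

;; mean-square neither knows nor cares whether square and
;; average are built in or defined by us.
(define (mean-square x y)
  (average (square x) (square y)))

(mean-square 2 3)  ; the average of 4 and 9, i.e. 6 1/2
```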
Okay, we've seen almost all the elements
of Lisp. Now there's only one more we
have to look at and that is how to make
a case analysis. Let me show you what I
mean. We might want to think about the,
the mathematical definition of the
absolute value functions. I might say: the
absolute value of X is the function
which has the property that it's
negative of the X for X less than 0, it's
0 for X equal to 0 and it's X for X
greater than 0. And let's pass a way of
making case analyses. Let me define for
you absolute value. We'll define: the
absolute value of x is, cond, right?
This means case analysis: cond. Okay, if x
is less than 0, the answer is negate x,
right? What I've written here is a clause.
This whole thing is a
conditional clause, and it has two parts.
This part here is a predicate or a
condition. That's a condition, and the
condition is expressed by something
called a predicate, and a predicate in
Lisp is some sort of thing that returns
either true or false. And you see, Lisp
has a primitive procedure, less than, that
tests whether something is true or false.
And the other part of a clause is an
action or a thing to do in the case
where that's true. And here what I'm
doing is negating X, the negation
operator,
well, the minus sign. In Lisp it's a little
bit funny: if
there are two arguments, it subtracts the
second one from the first, as we saw; if
there's one argument, it negates it. All
right, so this corresponds to that. And
then there's another cond clause. It says:
in the case where X is equal to 0, the
answer is 0, and in the case where x is
greater than 0, the answer is x. Close
that Clause, close the cond,
close the definition. There's a
definition of absolute value and you see,
it's a case analysis that looks very
much like the case analysis you use in
mathematics. Ok, there's a somewhat
different way of writing a restricted
case analysis. Often you have a case
analysis where you only have one case,
where you test something and then,
depending on whether it's true or false,
you do something. And here's another
definition of the absolute value, which
looks almost the same, which says: if X is
less than 0, the result is negate X,
otherwise the answer is X. And we'll be
using if a lot. But again, the thing to
remember is that this form of absolute
value that you're looking at here
and then this one over here that I wrote
on the board are essentially the same,
and if and cond, whichever way you
like it: you can think of cond as syntactic
sugar for if, or you can think of
if as syntactic sugar for cond, and it
doesn't make any difference. The person
implementing a Lisp system will pick one
and implement the other in terms of that,
and it doesn't matter which one you pick.
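The two definitions of absolute value, written out in Scheme. The names abs-val and abs-val-2 are only illustrative here, chosen to avoid shadowing the built-in abs:

```scheme
;; Case analysis with cond: each clause is a predicate
;; followed by an action to take when the predicate is true.
(define (abs-val x)
  (cond ((< x 0) (- x))
        ((= x 0) 0)
        ((> x 0) x)))

;; The restricted, one-test form of case analysis with if.
(define (abs-val-2 x)
  (if (< x 0) (- x) x))

(abs-val -7)    ; 7
(abs-val-2 -7)  ; 7
```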
Okay, why don't we break now and then
take some questions? How come sometimes,
when I write define,
I put an open paren here and say define,
open paren, something or other, and
sometimes, when I write this, I don't put
an open paren? Okay, the answer is: this
particular form of define, where you say
define, open paren, some expression, is a very
special thing for defining procedures.
But again, what it really means is I'm
defining the symbol square to be that.
So the way you should think about it is:
what define does is, you write define, and the
second thing you write is the symbol,
here, no open paren: the symbol you're
defining and what you're defining it to
be. That's like here and like here. That's
sort of the basic way you use define.
And then there's this special syntactic
trick which allows you to define
procedures that look like this. So the
difference is whether or not you're
defining a procedure.
[Music].
[Applause]
All right, well, believe it or not, you
actually now know enough Lisp to write
essentially any numerical procedure that
you'd write in a language like Fortran
or BASIC or whatever, essentially any
other language. You probably think
that's not believable, right? Because
you know that these languages have
things like FOR statements, and DO
UNTIL WHILE or something. But we
don't really need any of that. In fact, we're
not going to use any of that in this course.
Let me. Let me show you why. Again, looking
back at that square root, let's go back
to this square root algorithm of Heron
of Alexandria. Remember what that said. It
said: to find an approximation to the
square root of x, you make a guess, you
improve that guess by averaging the
guess and x over the guess. You keep
improving that until the guess is good
enough. I already alluded to the idea.
The idea is that if the initial guess
you took, if that initial guess was
actually equal to the square root of x,
then G here would be equal to x over G.
So if you hit the square root, averaging
them wouldn't change it. If the G that
you picked was larger than the square
root of x, then x over G will be smaller
than the square root of x, so that when
you average G and x over G, you get
something in between. All right. So if you
pick a G that's too small, your
answer will be too large. If you pick a G
that's too large, if your G is larger
than the square root of x, then x over G
will be smaller than the square root of
x. So averaging always gives you
something in between. And then it's not
quite trivial but it's. It's possible to
show that in fact, if G misses the square
root of x by a little bit, the average of
G and x over G will actually keep
getting closer to the square root of x.
So if you keep doing this enough, you'll
eventually get as close as you want. And
then there's another fact: that you can
always start out this process by using 1
as an initial guess
and always converge to the square root
of x. And so that's this method of
successive averaging, due to Heron of
Alexandria. Let's write it in
Lisp. Well, the central idea is: what does
it mean to try a guess for the square
root of x? Let's write that.
So we'll say: define, to try a guess for the
square root of x. What do we do? We'll say:
if the guess is good enough, good
enough to be a guess for the square
root of x, then as an answer we'll take
the guess. Otherwise we will try the
improved guess. We'll improve that guess
for the square root of x and we'll try
that as a guess for the square root of x.
Close the try, close the if, close the
define. So that's how we try a guess. And
then the next part of the process said:
in order to compute square roots,
we'll say: define. To compute the square
root of x, we will try one as a guess
for the square root of x. Well, we have to
define a couple more things. We have to
say: when is a guess good enough, and how
do we improve a guess? So let's look at
that, the algorithm to improve the guess.
Right, to improve a guess for the square
root of x, we average. That was the
algorithm. We average the guess with the
quotient of dividing X by the guess.
All right, that's how we improve a guess.
And to tell whether a guess is good
enough, well, we have to decide something.
Let's see: this is supposed to be a guess for
the square root of x. So one possible
thing you can do is say: when you take
that guess and square it, do you get
something very close to x, for example? So
one way to say that is:
I square the guess, subtract x from that,
and see if the absolute value of that
whole thing is less than some small
number, which depends on my purposes.
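Collected in one place, Heron's method as developed here might look like the following sketch. The name sqrt-heron and the particular tolerance 0.001 are choices for this illustration; the lecture just calls the procedure sqrt and leaves the small number unspecified:

```scheme
(define (square x) (* x x))
(define (average x y) (/ (+ x y) 2))

;; Improve a guess by averaging it with x over the guess.
(define (improve guess x)
  (average guess (/ x guess)))

;; A guess is good enough when its square is within a
;; small tolerance of x.
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

;; Try a guess: if it is good enough, take it as the answer;
;; otherwise try the improved guess. Note that try is
;; defined in terms of try itself.
(define (try guess x)
  (if (good-enough? guess x)
      guess
      (try (improve guess x) x)))

;; Always start the process with 1 as the initial guess.
(define (sqrt-heron x) (try 1 x))

(sqrt-heron 2)  ; roughly 1.41421
```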
Right, okay, so there's a
complete procedure for how to compute
the square root of x. Let's look at the
structure of that a little bit. All right,
I have the whole thing. I have the notion
of how to compute a square root.
That's some kind of module, right? That's
some kind of black box. It's defined in
terms of, right, it's defined in terms of
how to try a guess for the square root
of x. Try is defined in terms of, well,
telling whether something is good enough
and telling how to improve something. So
try is defined in terms of
good-enough and improve. And, let's see,
what else? As we go
down this tree: good-enough was defined
in terms of absolute value and square,
and improve was defined in terms of
something called averaging and then some
other primitive operators. So square
root is defined in terms of try, try is
defined in terms of good-enough and improve,
but also try itself. So try is also
defined in terms of how to try itself.
Well, that may give you some
problem right here. Your high school
geometry teacher probably told you that
it's naughty to try and define
things in terms of themselves, because it
doesn't make sense. But that's false,
all right. Sometimes it makes perfect
sense to define things in terms of
themselves, and this is a case, and we can
look at that, we can write down what this
means.
You'd say, suppose I ask Lisp what the
square root of two is. What's the square
root of two mean? Well, that means I try
one as a guess for the square root of