Discussion:
[stack] What does "concatenative" actually mean?
spir
2009-03-02 20:19:32 UTC
Permalink
Hello,

New to the list.
A long time ago, I was fascinated by Forth's ability to let one write programs/words by simply combining lower level words/programs. This was wonderfully illustrated by Logo's visual version of the feature.
I intuitively understand "concatenative" as meaning "constructive" in the sense of "brick programming". This is what I have always looked for, probably. Still, I do not really get why some languages may be so qualified; why most languages do not have this property.
I have just discovered that Forth (which I only used a little, a long time ago) is supposed to be a concatenative language. I read the article on Wikipedia, some more related texts and a few threads on this list. I do not know Joy (will read a bit while waiting for your replies ;-). Below are my present ideas on the topic. I wish you would comment and criticize so that maybe... eurêka! (I am both an amateur and a non-English speaker: please be tolerant of my weird phrasing.)

I will use a notation where, for Forth like languages:
* Digits refer to literal data.
* An uppercase letter refers to a word/defined thing.
* A lowercase letter refers to any item inside a word definition, which may be a literal or another word.



-1- referential transparency ================

In stack-based languages,
:W 1
actually means "push 1 (whatever it actually is) on top of the stack".
In a "real" OO language (based on messaged passing, such as usually done in prototypes languages like Io), a reference to a data slot actually calls a data accessor.
Is this analog to "pushing on the stack" ?

:X <do something with 2 values on top of the stack>
:W 1 2 X

I intuitively think that this expression is analogous to the following:

method X(a1, a2)
    <do something with a1 & a2>
method W
    x := 1    # literal value: call accessor
    y := 2    # idem
    X(x, y)

This leads me to wonder whether concatenation is related to referential transparency. Let us imagine a program whose parse tree looks like this:
X
  x1
  x2
  x3
Y
  y1
  y2
  y3
Z
  z1
  z2
  z3
A
  a
  X
  Z
B
  b
  Y
  Z

I have the impression that the language is concatenative iff we can safely translate A & B to:
A
  a
  x1
  x2
  x3
  z1
  z2
  z3
B
  b
  y1
  y2
  y3
  z1
  z2
  z3

That is, replace words with their definitions. This means that a word's definition only relies on the definitions of the words it references. So what makes most languages "unsafe" in this respect? What kind of element in a definition breaks the scheme? My hypothesis is: definitions may use outer things like environment, state, variables... In the tree above, this translates to the fact that several occurrences of, say, z1 do not perform exactly the same action. For instance, it may not push the same value on the stack: then, if z2 uses this value, it will not produce the same result.
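
To make my hypothesis concrete, here is a small Haskell sketch (only an illustration with made-up names, not real Forth or Joy): a word is just a list of tokens, and expanding a word into its definition is plain list concatenation.

import qualified Data.Map as M

-- A token is either a primitive action or a reference to another word.
data Token = Prim String | Word String

type Dict = M.Map String [Token]

-- A hypothetical dictionary matching the parse tree above.
dict :: Dict
dict = M.fromList
  [ ("X", [Prim "x1", Prim "x2", Prim "x3"])
  , ("Z", [Prim "z1", Prim "z2", Prim "z3"])
  , ("A", [Prim "a", Word "X", Word "Z"])
  ]

-- Expanding a definition is just concatenating the expansions of its
-- tokens; if that substitution is always safe, I would call the
-- language concatenative.
expand :: Dict -> [Token] -> [String]
expand d = concatMap step
  where
    step (Prim p) = [p]
    step (Word w) = expand d (M.findWithDefault [] w d)

-- expand dict [Word "A"]  ==>  ["a","x1","x2","x3","z1","z2","z3"]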

What leaves me uneasy with this point of view is that "concatenative" is not supposed to be a synonym of "purely functional". Or what? Are purely functional languages necessarily, logically, concatenative? But maybe there are other ways to achieve this property, meaning that concatenative is a superset of functional?
Or am I totally wrong?
Anyway, Forth is not usually described as functional. Also, stack-based languages can perfectly well use data external to a word definition (outside the stack). Are they then still concatenative?



-2- the self expression hypothesis ================

Another unusual feature of Forth is its ability to compile itself. This is related I guess to the property that new words are equally treated as builtin words. Io has a similar feature and, as I understand it, a consequence is that it has no keyword in the usual sense of the term.
To allow such a property, these languages' grammars map directly to a parse tree (like Lisp's, too) without any structural transformation. This is related, if I understand it correctly, to the fact that the grammar itself does not hold semantics. Or not in the same way as in other languages; rather, it is basically formal.
Still, there must be some primitives here or there, "under the hood", else such a language would really be a pure form and a program would execute without *doing* anything ;-) These primitives are not expressed in the language itself. There are operational primitives (such as IO), but also some minimal "meta-linguistic" ones, such as, in Forth:

* A number token means "push this number (value) on the stack"
* ':...;' means "parse and store this definition"
* A word token means "recall this word's definition"
[Expressed in my own words, which may not be accurate.]

Analogous operational and meta-linguistic primitives can be found in Io and other languages. In Io's case, and I guess in Forth too, even those things can be redefined. [E.g. in Io one can give a new definition to '(...)', whose default is close to the one in Lisp.]

Now, my question is: is there any relation between such self-defining properties and the notion of "concatenativity"?



-3- human expressiveness ======================

I wonder about the practical usefulness of the concatenative propertie(s). The ability to build a program by pure composition is great. But it seems to lead to great modelling restrictions, too. I will illustrate my question with a simple example.

Let's take the "mystery number" game. [Guess a number between, e.g., 1 & 100. The "game master" answers "more" or "less" until the number is found.] As I see it:

* There are 2 actors.
* They communicate: the player "guesses", the master "answers".
* The master holds a "secret" data item; the player has current "lower" and "upper" bounds.

This example maps really well to OO semantics; sure, I didn't choose it randomly ;-) But I really think, too, that it illustrates a kind of natural way to express most non-scientific models. (I.e. models that are more a system than a "problem" mapping to a (possibly tree-like) process.)

What would a model of this game look like, expressed in a concatenative language? (I really am asking: if some of you want to answer with source code...)

I guess that in a functional language it would lead to conceptual distortion. Slight, manageable distortion, because the game is so simple. What I mean is that such languages are often unable to allow a direct mapping from a model to its source-code expression. In other words: for me, a program should look like (be analogous to) what it purports to express.
This statement may only reflect a personal bias. Prototype-based languages are, IMO, the best metaphor/paradigm as of now for allowing direct, "natural" expression. (In the OO world, most class-based languages yield, on the contrary, incredible and highly abstract complication -- and extreme dependencies!)

Is there a way to accord concatenation and basic OO? Or are these approaches -- as I suspect -- definitely incompatible?


Denis
------
la vita e estrany
John Nowak
2009-03-02 21:23:15 UTC
Permalink
Post by spir
I intuitively understand "concatenative" as meaning "constructive"
in the sense of "brick programming". This is what I have always
looked for, probably. Still, I do not really get why some languages
may be so qualified; why most languages do not have this property.
The Wikipedia article was completely changed a few days ago and now
gives a (mostly) good definition of what qualifies; I'd start there:
http://en.wikipedia.org/wiki/Concatenative_programming_language
Post by spir
In stack-based languages,
:W 1
actually means "push 1 (whatever it actually is) on top of the stack".
In a "real" OO language (based on messaged passing, such as usually
done in prototypes languages like Io), a reference to a data slot
actually calls a data accessor.
Is this analog to "pushing on the stack" ?
I'm not sure what you mean exactly, but I think the answer is probably
no. I don't think trying to understand concatenative languages in
terms of message passing is likely to yield good results. The problem
is that, unlike in Io, you don't really have a way of denoting the
receiver of a given message. Especially in the higher order languages
like Joy that have operations like 'dip', 'bi', etc, it's not clear
how a message passing semantics would work. (That's not to say you
couldn't make it work of course.)

The way to think about concatenative languages is in terms of function
composition. Each term in the language represents a function that
takes a stack as an argument and returns a new stack as a result. This
is most clearly seen in Joy where stacks are first class and
inexpensive to pass around freely. I'd try not to think about it in
the Forth sort of mindset where there's a single stack (or pair of
stacks) that all operations mutate.
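
A rough Haskell sketch of that idea (my own illustrative names, not the semantics of any particular concatenative language): every word denotes a function from stacks to stacks, and putting words next to each other just composes those functions.

type Stack = [Int]

-- Each word is a function Stack -> Stack.
push :: Int -> Stack -> Stack
push n s = n : s

add :: Stack -> Stack
add (a : b : rest) = (a + b) : rest
add _              = error "stack underflow"

dup :: Stack -> Stack
dup (a : rest) = a : a : rest
dup _          = error "stack underflow"

-- The program "1 2 add dup" is the composition of its four words.
-- (Haskell's (.) composes right to left, so the word order reverses.)
program :: Stack -> Stack
program = dup . add . push 2 . push 1

-- program []  ==>  [3, 3]
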
Post by spir
This leads me to wonder whether concatenation is related to
referential transparency.
In Joy, most functions are referentially transparent. Depending on how
you look at it, you might even say that all functions are
referentially transparent because the compositional nature of the code
makes it possible to thread a single invisible state throughout the
entire program. Some degree of hand-waving is required however.
Post by spir
I have the impression that the language is concatenative iff we can
... that is, replace words with their definitions. This means that a
word's definition only relies on the definitions of the words it
references.
Correct.
Post by spir
So what makes most languages "unsafe" in this respect? What kind of
element in a definition breaks the scheme? My hypothesis is:
definitions may use outer things like environment, state, variables...
Correct; things like local variables make a language non-
concatenative. Factor offers locals but uses a compile-time transform
to convert definitions that use them to purely concatenative code.
Post by spir
What leaves me uneasy with this point of view is that "concatenative"
is not supposed to be a synonym of "purely functional". Or what?
It depends on the language. For example, all data is persistent in
Joy; only IO will cause a visible side effect. In a language like
Factor, much data is not persistent.
Post by spir
Are purely functional languages necessarily, logically, concatenative?
Not at all. Haskell is purely functional (i.e. there are *no* side
effects), but not concatenative.
Post by spir
Another unusual feature of Forth is its ability to compile itself.
This is related I guess to the property that new words are equally
treated as builtin words. Io has a similar feature and, as I
understand it, a consequence is that it has no keyword in the usual
sense of the term.
Many languages have the property that new words are treated equally to
built-in words. What's unique about Io is that any function has access
to the message sent to it before any expressions associated with that
message are reduced. This is similar to the situation in Joy and XY
where functions can manipulate the functions passed to them (called
"quotations") as if they were lists. Factor offers a macro system that
lets you do the same sort of things you do in Forth.
Post by spir
Still, there must be some primitives here or there, "under the hood",
else such a language would really be a pure form and a program would
execute without *doing* anything
You may want to look at this page; it shows how concatenative
languages can be Turing complete with a very minimal combinator
selection:
http://tunes.org/~iepos/joy.html
Post by spir
Now, my question is: is there any relation between such self-
defining properties and the notion of "concatenativity"?
The simple grammar of concatenative languages makes it easy to do the
sort of things you're talking about. It's not a property unique to
concatenative languages however.
Post by spir
I wonder about the practical usefulness of the concatenative
propertie(s).
They're useful. Tons of code is written in Forth. Factor has shown
tremendous progress in a very short period of time. If you're
interested in how such languages can be productive, I'd encourage you
to download Factor and get to work.

Factor offers an OO system which may make you feel more at home,
although it's based on multiple dispatch (I think) rather than message
passing; the former makes more sense for a concatenative language. I
don't know the system well enough to give you an example to your
problem, although I'm sure asking on the #concatenative channel on
Freenode would get you results; quite a few Factor programmers hang
out there.
Post by spir
Is there a way to accord concatenation and basic OO? Or are these
approaches -- as I suspect -- definitely incompatible?
They're not incompatible. Many Forths have objects, Factor has
objects, and I believe there was talk of adding a prototype-based
object system to Cat at one point as well. If you want to reimplement
Io in Factor, you'd be more than able to. I'd encourage you to try the
existing system first though and see what you think.

- John
Daniel Ehrenberg
2009-03-02 21:42:30 UTC
Permalink
Post by John Nowak
They're useful. Tons of code is written in Forth. Factor has shown
tremendous progress in a very short period of time. If you're
interested in how such languages can be productive, I'd encourage you
to download Factor and get to work.
Factor offers an OO system which may make you feel more at home,
although it's based on multiple dispatch (I think) rather than message
passing; the former makes more sense for a concatenative language. I
don't know the system well enough to give you an example to your
problem, although I'm sure asking on the #concatenative channel on
Freenode would get you results; quite a few Factor programmers hang
out there.
Post by spir
Is there a way to accord concatenation and basic OO? Or are these
approaches -- as I suspect -- definitely incompatible?
They're not incompatible. Many Forths have objects, Factor has
objects, and I believe there was talk of adding a prototype-based
object system to Cat at one point as well. If you want to reimplement
Io in Factor, you'd be more than able to. I'd encourage you to try the
existing system first though and see what you think.
Factor's OO system isn't based on message passing. Generic words are
words that can have methods defined on them for certain classes, like
Lisp. But multiple dispatch mostly isn't used (yet); there's a library
for multiple dispatch that's not in core.

Factor's object system used to be a unique hybrid between being
class-based and including delegation, but now it's more traditional.
There are things that are like structs, that can have single
inheritance among each other. Then, there are mixins, which are really
extensible union classes that take the place of declaring that a class
implements a certain interface. (It's not actually enforced that the
class does implement the interface.) There are also predicate classes,
singleton classes and a few other convenient things. The library
defines more (e.g. multimethods, delegation), since all the internals of
the object system are fully hackable.

I think Eduardo Cavazos wrote a prototype-based OO system in Factor,
but I'm not sure if it's still around. Generally, Factor is flexible
enough that you can implement whatever object system you want as a
library, and it can be made pretty efficient. Factor's object system
used to be just a library, but now it gets special help from the
compiler (type inference is used to reduce dispatch, etc), so it can't
really be considered that way now.

Dan
spir
2009-03-03 09:56:56 UTC
Permalink
On Mon, 2 Mar 2009 16:23:15 -0500,
Post by John Nowak
Post by spir
I intuitively understand "concatenative" as meaning "constructive"
in the sense of "brick programming". This is what I have always
looked for, probably. Still, I do not really get why some languages
may be so qualified; why most languages do not have this property.
The Wikipedia article was completely changed a few days ago and now
gives a (mostly) good definition of what qualifies; I'd start there:
http://en.wikipedia.org/wiki/Concatenative_programming_language
Thank you for your extensive reply.
I have read this page numerous times ;-)
Post by John Nowak
Post by spir
In stack-based languages,
:W 1
actually means "push 1 (whatever it actually is) on top of the stack".
In a "real" OO language (based on messaged passing, such as usually
done in prototypes languages like Io), a reference to a data slot
actually calls a data accessor.
Is this analog to "pushing on the stack" ?
I'm not sure what you mean exactly, but I think the answer is probably
no.
What I mean is this (leaving aside the notion of the object to which a message is sent -- say that there is a unique global object, or talk of plain functions).

:plus <whatever> ;
:plus1 1 plus ;

Can we say that the code above is equivalent to:

func plus(a, b)
    <whatever>
func plus1(n)
    plus(n, 1)

or more explicitly:


:plus <whatever> push; # 'push' means push result on the stack
:plus1 1 push plus ; # result already on the stack

func plus(a, b)
    <whatever>
    return result
func plus1(n)
    # equivalent to pushing 'data'?
    x := n
    y := 1
    # then only call the other func
    plus(x, y)

I wonder if the functional way and the stack way are equivalent ways of defining code snippets -- that use variables in the math sense of "unpredictable values" (at design time) -- that are secure; meaning that they will perform a fully defined action and/or are concatenatable as code bricks.
Post by John Nowak
Post by spir
This leads me to wonder whether concatenation is related to
referential transparency.
In Joy, most functions are referentially transparent. Depending on how
you look at it, you might even say that all functions are
referentially transparent because the compositional nature of the code
makes it possible to thread a single invisible state throughout the
entire program. Some degree of hand-waving is required however.
[...]
Post by John Nowak
Post by spir
What leaves me uneasy with this point of view is that "concatenative"
is not supposed to be a synonym of "purely functional". Or what?
It depends on the language. For example, all data is persistent in
Joy; only IO will cause a visible side effect. In a language like
Factor, much data is not persistent.
Post by spir
Are purely functional languages necessarily, logically, concatenative?
Not at all. Haskell is purely functional (i.e. there are *no* side
effects), but not concatenative.
Still unclear for me. (good, I have unknown worlds to explore ;-)
Post by John Nowak
Post by spir
I wonder about the practical usefulness of the concatenative
propertie(s).
They're useful. Tons of code is written in Forth.
[...]
I expressed myself unclearly.
Post by John Nowak
Post by spir
Is there a way to accord concatenation and basic OO? Or are these
approaches -- as I suspect -- definitely incompatible?
They're not incompatible. Many Forths have objects, Factor has
objects, and I believe there was talk of adding a prototype-based
object system to Cat at one point as well. If you want to reimplement
Io in Factor, you'd be more than able to. I'd encourage you to try the
existing system first though and see what you think.
This is something I really do not understand. In many OO languages, especially prototype-based ones, objects are simple map/dict-like variables. They can be referred to from inside any word/func/'action'; so as I understand the point they act, at least can act, as global state, environment, variables that will necessarily break the (referential) "safety" of the function. Meaning that they have unpredictable effect/result. How can one then simply combine such functions as code bricks -- which is how I understand concatenativity?
There must be something my brain is unable to catch...
Post by John Nowak
- John
Thank you again,
Denis
------
la vita e estrany
John Nowak
2009-03-03 10:14:19 UTC
Permalink
Post by spir
:plus <whatever> ;
:plus1 1 plus ;
func plus(a,b)
<whatever>
func plus1(n)
plus(n, 1)
I'm not sure this'll help, but I'll try.

In a stack-based concatenative language, every function takes a stack
as input and returns a new stack as a result. Accordingly, we might
implement the 'plus' function of a concatenative language in Scheme as
such:

(lambda (s) (cons (+ (car s) (cadr s)) (cddr s)))

Another example would be 'dup':

(lambda (s) (cons (car s) s))

And 'dip':

(lambda (s) (cons (cadr s) ((car s) (cddr s))))

Of course, if you're not familiar with Lisp, I've just confused you
even more.
Post by spir
Post by John Nowak
Not at all. Haskell is purely functional (i.e. there are *no* side
effects), but not concatenative.
Still unclear for me. (good, I have unknown worlds to explore ;-)
In Haskell, not all terms denote functions. In concatenative
languages, all terms denote functions. In Haskell, juxtaposition
denotes application. In concatenative languages, juxtaposition denotes
composition.
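
To sketch the contrast in Haskell (illustrative definitions only):

type Stack = [Int]

-- Haskell: juxtaposition is application; 'negate 3' applies negate to 3.
applied :: Int
applied = negate 3

-- Concatenative reading: each term denotes a Stack -> Stack function,
-- and writing terms next to each other denotes their composition.
lit3, negateTop :: Stack -> Stack
lit3 = (3 :)
negateTop (x : r) = negate x : r
negateTop _       = error "stack underflow"

-- The "program" 3 negate is the composition of the two terms.
composed :: Stack -> Stack
composed = negateTop . lit3     -- composed []  ==>  [-3]
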
Post by spir
This is something I really do not understand. In many OO languages,
especially prototype-based ones, objects are simple map/dict-like
variables. They can be referred to from inside any word/
func/'action'; so as I understand the point they act, at least can
act, as global state, environment, variables that will necessarily
break the (referential) "safety" of the function.
The object systems in Factor and Forth do break referential
transparency. Whether this means they are not "purely concatenative"
depends on who you ask.

- John
William Tanksley
2009-03-03 15:11:21 UTC
Permalink
Post by John Nowak
The object systems in Factor and Forth do break referential
transparency. Whether this means they are not "purely concatenative"
depends on who you ask.
Thanks for taking the time to explain; I agree with most of what you've
said, but this is false.

Forth doesn't have any single "object system"; some of the open-source
alternatives available do destroy local concatenativity, but most of
them don't.

Factor's official object system is definitely fully concatenative. It
won't matter who you ask; if they disagree with me, they're wrong. ;-)

Technical point: Referential Transparency in a concatenative language is
different from the same concept in an applicative language. In one
sense, it's harder to use, since you can't simply move words from one
place to another (without rearranging the stack); in another sense it's
easier to use, since you can "factor out" a word without having to do
any work at all (cut and paste is sufficient). Unlike applicative
languages, you do NOT have to worry about functional state; the fact
that the stack acts as a monad means that the state is already
preserved; even I/O can be modelled properly.
Post by John Nowak
- John
-Wm
John Nowak
2009-03-03 19:16:18 UTC
Permalink
Post by William Tanksley
Factor's official object system is definitely fully concatenative. It
won't matter who you ask; if they disagree with me, they're wrong. ;-)
Technical point: Referential Transparency in a concatenative
language is
different from the same concept in an applicative language. In one
sense, it's harder to use, since you can't simply move words from one
place to another (without rearranging the stack); in another sense it's
easier to use, since you can "factor out" a word without having to do
any work at all (cut and paste is sufficient). Unlike applicative
languages, you do NOT have to worry about functional state; the fact
that the stack acts as a monad means that the state is already
preserved; even I/O can be modelled properly.
Not everyone agrees on this point. Take a look at this thread:

http://lambda-the-ultimate.org/node/3050

Many would say Joy is not purely functional and that the "monadic
stack" argument essentially amounts to hand-waving. If you take this
view, then the question of Joy or Factor being "purely concatenative"
depends on if you require concatenative languages to be "purely
functional".

Joy isn't "pure enough" to allow evaluation via postfix term rewriting
in an arbitrary order. If you had a type system that could enforce
something similar to the I/O monad in Haskell, it would be. I
definitely think there's some use in distinguishing between the
relative purity of both approaches.

I feel that being able to evaluate in any order without worrying about
side effects is somehow "more concatenative" than having to evaluate
left to right. Rather than make a claim here, I tried to sidestep the
issue to avoid contention. I see I failed once again! :)

- John
William Tanksley
2009-03-04 21:59:59 UTC
Permalink
Post by John Nowak
Post by William Tanksley
Technical point: Referential Transparency in a concatenative
language is
different from the same concept in an applicative language. In one
sense, it's harder to use, since you can't simply move words from one
place to another (without rearranging the stack); in another sense it's
easier to use, since you can "factor out" a word without having to do
any work at all (cut and paste is sufficient). Unlike applicative
languages, you do NOT have to worry about functional state; the fact
that the stack acts as a monad means that the state is already
preserved; even I/O can be modelled properly.
http://lambda-the-ultimate.org/node/3050
I think they're trying to discuss how not all concatenative languages
need to be functional and vice versa... A point I've made myself just
recently. But you're right to correct me when I use the technical term
"referential transparency" in an inappropriate way. Only purely
functional languages can be referentially transparent.
Post by John Nowak
Many would say Joy is not purely functional
I agree with them.
Post by John Nowak
and that the "monadic
stack" argument essentially amounts to hand-waving.
There are a lot more details to be worked out, but the essence is true --
linear logic with data items on a stack imposes an order on computation.
Extracting that order is not a completely solved problem, and it's
always been an item of interest to me.

I'm often overly impatient about figuring it out -- that makes me say
things like "the stack is a monad, concatenative languages need nothing
else." I'm wrong to say that; it's more complicated than that. But a
naive concatenative language (i.e. one that executes strictly in order)
needs nothing else; and a sophisticated one needs only as much
sophistication as its out-of-order execution can handle.
Post by John Nowak
If you take this
view, then the question of Joy or Factor being "purely concatenative"
depends on if you require concatenative languages to be "purely
functional".
As you can see, I don't require that. My definition of "concatenative"
implies and requires associativity in syntax and semantics; it doesn't
mention functionality. A purely concatenative language may be highly
stateful, and thus not a functional language. It may be zeroth order,
and therefore not a function-level language. It's still concatenative.

Joy is supposed to be somewhat functional -- if von Thun had intended it
to be purely functional he would have done something different with I/O.
Factor makes no claims as to functionality, just practicality, so the
"question" has never been raised.
Post by John Nowak
Joy isn't "pure enough" to allow evaluation via postfix term rewriting
in an arbitrary order. If you had a type system that could enforce
something similar to the I/O monad in Haskell, it would be. I
definitely think there's some use in distinguishing between the
relative purity of both approaches.
I entirely agree; that's why it's essential to maintain a distinction
between "purely functional" and "purely concatenative". You can have
both, neither, or either. There are benefits and prices for all
combinations.
Post by John Nowak
I feel that being able to evaluate in any order without worrying about
side effects is somehow "more concatenative" than having to evaluate
left to right. Rather than make a claim here, I tried to sidestep the
issue to avoid contention. I see I failed once again! :)
You'll not escape me! :)

I'd disagree that it's more concatenative to be able to evaluate in any
order. It is certainly more functional (or declarative), though. And
there IS value to it. The problem is that there's also a price to be
paid, and not every programmer wants to pay it.

In reality, the stack is not the only thing that gets passed to every
function. In every practical concatenative language so far invented
there's always something else -- sometimes a dictionary, sometimes an
I/O device... Often an entire virtual machine. There's value in
simplifying this machine (following a functional approach), and there's
value in clearly characterizing it (following a formal approach). One
need not do either.

For an example of how a more functional approach simplified a language,
consider Joy's introduction of 'dip', which entirely eliminated the need
for Forth's programmer-editable return stack. Of course, the return
stack or something like it is still hiding in the background; stevan's
XY language makes its replacement explicit, but in both cases it remains
a part of the global state, just like the data stack.
Post by John Nowak
- John
-Wm
John Nowak
2009-03-04 22:37:05 UTC
Permalink
Post by William Tanksley
I'm often overly impatient about figuring it out -- that makes me say
things like "the stack is a monad, concatenative languages need nothing
else." I'm wrong to say that; it's more complicated than that. But a
naive concatenative language (i.e. one that executes strictly in order)
needs nothing else; and a sophisticated one needs only as much
sophistication as its out-of-order execution can handle.
You're right, but I don't think this applies just to concatenative
languages. I may be misunderstanding what you're saying, but let me
clarify my point anyway.

Take the Scheme expression '(+ (+ 1 2) (+ 2 3))'. Instead of '()'
meaning application, we'll have it form a function that, when applied
to a world value, returns a tuple of a new world value and the result
of the expression. To reduce within a function formed by '()', the
function receiving the world would pass it to the left-most '()'
within it, get the tuple of the world and result back, save the
result, and pass the world to the next '()' within it. At the top
level of our program, we'll apply our function to the world value to
kick things off.

What I'm trying to say is that, with a bit of hand-waving, you can
make any language "purely functional" just by describing how the world
value would be threaded through the application. It should be obvious
though that this isn't particularly useful if it lets us claim that,
say, C is purely functional (which it would).

Threading a world value around is only useful if you *don't* pass it
to everything. The functions that don't need it can be evaluated and
reasoned about without caring about it.
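
Here is a rough Haskell sketch of that transformation (hypothetical names; only the shape of the types matters): every parenthesised expression becomes a function from a world to a pair of a new world and a result, and reduction threads the world through left to right.

-- A placeholder "world" value; its contents are irrelevant here.
data World = World

-- Every '()' form becomes a function of this shape.
type Expr a = World -> (World, a)

-- A literal leaves the world untouched.
lit :: a -> Expr a
lit x w = (w, x)

-- Thread the world through the left operand, then the right, then add.
plusE :: Expr Int -> Expr Int -> Expr Int
plusE ex ey w0 =
  let (w1, x) = ex w0
      (w2, y) = ey w1
  in  (w2, x + y)

-- '(+ (+ 1 2) (+ 2 3))' with the world threaded through every node.
example :: Expr Int
example = plusE (plusE (lit 1) (lit 2)) (plusE (lit 2) (lit 3))

-- snd (example World)  ==>  8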

You are 100% right that the monadic nature of concatenative code makes
linearity easier. Henry Baker would certainly agree. I think it also
may have the potential to make Haskell-style monadic code more natural
thanks to the left-to-right syntax.

I'm not even sure if I'm disagreeing with you; I just think we should
probably be a bit more careful with our claims. I'm certainly included
in that "we". In fact, I may have made some bogus claims related to
this very topic not so long ago.
Post by William Tanksley
Joy is supposed to be somewhat functional -- if von Thun had
intended it
to be purely functional he would have done something different with I/O.
Manfred von Thun has, as far as I know, always claimed that Joy is
purely functional. Numerous articles on the Joy site make this claim,
the Wikipedia article for Joy makes this claim, et cetera. After
giving this some thought, I really do think it is misleading. Perhaps
Manfred meant to exclude IO from his claim; I've seen some evidence
for this, namely in the interview on NSL, but I don't want to guess at
his meaning.
Post by William Tanksley
I entirely agree; that's why it's essential to maintain a distinction
between "purely functional" and "purely concatenative". You can have
both, neither, or either. There are benefits and prices for all
combinations.
I completely agree with your distinction.
Post by William Tanksley
Post by John Nowak
I feel that being able to evaluate in any order without worrying about
side effects is somehow "more concatenative" than having to evaluate
left to right. Rather than make a claim here, I tried to sidestep the
issue to avoid contention.
I'd disagree that it's more concatenative to be able to evaluate in any
order. It is certainly more functional (or declarative), though. And
there IS value to it. The problem is that there's also a price to be
paid, and not every programmer wants to pay it.
Again, I agree. This is a good way of breaking down the situation.

- John
John Cowan
2009-03-04 22:55:19 UTC
Permalink
Post by John Nowak
Manfred von Thun has, as far as I know, always claimed that Joy is
purely functional. Numerous articles on the Joy site make this claim,
the Wikipedia article for Joy makes this claim, et cetera. After
giving this some thought, I really do think it is misleading. Perhaps
Manfred meant to exclude IO from his claim; I've seen some evidence
for this, namely in the interview on NSL, but I don't want to guess at
his meaning.
To be fair, most of those claims predate the addition of input to Joy,
which happened when I reworked Joy0 into Joy1, the current version
with the file I/O, random numbers, clock time, and command-line args.
Before that, the only I/O function was ".", which outputs the top of
stack; by itself, that doesn't make Joy0 not purely functional.
--
Barry gules and argent of seven and six, John Cowan
on a canton azure fifty molets of the second. ***@ccil.org
--blazoning the U.S. flag http://www.ccil.org/~cowan
John Nowak
2009-03-04 23:05:38 UTC
Permalink
Post by John Cowan
To be fair, most of those claims predate the addition of input to Joy,
which happened when I reworked Joy0 into Joy1, the current version
with the file I/O, random numbers, clock time, and command-line args.
Before that, the only I/O function was ".", which outputs the top of
stack; by itself, that doesn't make Joy0 not purely functional.
Good to know. Relatively speaking, I'm late to the game here.

- John
William Tanksley
2009-03-03 16:05:02 UTC
Permalink
Post by spir
I wonder if the functional way and the stack way are equivalent
ways of defining code snippets -- that use variables in the math
sense of "unpredictable values" (at design time) -- that are secure;
meaning that they will perform a fully defined action and/or are
concatenatable as code bricks.
The best way to say that kind of thing -- assuming that I understand
what you're asking for -- is to provide a constructive method to
transform one representation into another. Nowak pointed you at Kerby's
web page earlier; that includes an algorithm to convert lambda notation
into stack notation, thus proving that the two are equivalent (assuming
that you have a complete basis in both).
Post by spir
Post by John Nowak
Post by spir
This leads me to wonder whether concatenation is related to
referential transparency.
[...]
What leaves me uneasy with this point of view is that "concatenative"
is not supposed to be a synonym of "purely functional". Or what?
It's not. Joy attempts to be a partially pure functional language that's
purely concatenative; Cat and 5th try much harder to be functionally
pure. My little toy language achieves complete functional purity by the
expedient mechanism of being completely useless for any practical
purpose.

But Forth and PostScript are (reasonably) concatenative, and are not
functionally pure at all.

"Concatenative" and "functional" are two completely different adjectives
that can be applied completely independently. The only thing they share
in common is that both allow you to make mathematical statements about
your code -- and if your language is "pure" enough, those statements
help you reason about your code.
Post by spir
Post by John Nowak
Post by spir
I wonder about the practical usefulness of the concatenative
propertie(s).
They're useful. Tons of code is written in Forth.
I expressed myself unclearly.
To me, the most useful property is that concatenative code is fully
associative. This implies that you can group concatenative code in any
way that's convenient for you. If you're reading the code, you can "give
up" on a complex section of code and skip ahead to try to make sense of
a later section; what you discover will be correct no matter what the
earlier section actually meant. If you're writing or refactoring the
code, it means that you can "physically" group the code any way you
want: just cut any section out, replace it with a new function name, and
paste the old section into the definition of the new function.

One mathematical property, two uses.
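
In composition terms, here is a rough Haskell sketch of why that works (made-up word names, nothing language-specific): composition is associative, so any contiguous run of words can be cut out, given a name, and the name pasted back in without changing the program.

type Stack = [Int]

w1, w2, w3, w4 :: Stack -> Stack
w1 = (10 :)                      -- push 10
w2 = (2 :)                       -- push 2
w3 (a : b : r) = (a * b) : r     -- multiply the top two items
w3 _           = error "stack underflow"
w4 (a : r)     = (a + 1) : r     -- add one to the top item
w4 _           = error "stack underflow"

-- The whole program as one chain of words.
whole :: Stack -> Stack
whole = w4 . w3 . w2 . w1

-- "Cut out" the middle two words and name them...
middle :: Stack -> Stack
middle = w3 . w2

-- ...then paste the name back in; associativity guarantees that
-- whole and refactored denote the same function.
refactored :: Stack -> Stack
refactored = w4 . middle . w1
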
Post by spir
This is something I really do not understand. In many OO languages,
especially prototype-based ones, objects are simple map/dict-like
variables. They can be referred to from inside any word/func/'action';
so as I understand the point they act, at least can act, as global
state, environment, variables that will necessarily break the
(referential) "safety" of the function. Meaning that they have
unpredictable effect/result. How can one then simply combine such
functions as code bricks -- which is how I understand concatenativity?
There must be something my brain is unable to catch...
The truly pure functional languages, like Haskell, have a mathematical
model for state-changing effects, like objects and I/O. The problem is
that purely functional languages don't express what order their
operations will occur in; they just tell the computer what operations to
do, and let the compiler figure out the best order. So Haskell provides
a mathematical construct called a "monad". Monads can mathematically
only be evaluated in order, so the compiler receives enough information
to know what order to execute your code in.

Concatenative languages use a stack (or something like that), and as a
result they already have a monad built into them. If you do something
that's functionally unsafe (stateful) you're still concatenatively safe,
so your associative property still applies.

(However, state-free concatenative languages DO have some nice
additional properties.)
Post by spir
Denis
-Wm
John Cowan
2009-03-03 18:15:18 UTC
Permalink
Post by William Tanksley
The truly pure functional languages, like Haskell, have a mathematical
model for state-changing effects, like objects and I/O. The problem is
that purely functional languages don't express what order their
operations will occur in; they just tell the computer what operations to
do, and let the compiler figure out the best order. So Haskell provides
a mathematical construct called a "monad". Monads can mathematically
only be evaluated in order, so the compiler receives enough information
to know what order to execute your code in.
You don't really need monads: they are just a convenience, because
it would be painful to thread state through every function in
your program just in case a callee wants to use some of it; see
http://lambda-the-ultimate.org/node/1155#comment-12726 for details.
In Joy, the equivalent of the I/O monad is the simple fact that "read"
has a stack shape of F -> F O, preserving the F object on the stack.
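
Spelled out as Haskell types (a loose sketch with placeholder names, not Joy's actual implementation), that F -> F O shape is just explicit threading: "read" takes the file and hands it back along with what it read, so callers keep passing the file along and no monad is required.

-- Stand-ins for Joy's file value and for the data it produces.
data File   = File
type Output = String

-- The F -> F O shape: consume a File, give the File back plus a result.
readLine :: File -> (File, Output)
readLine f = (f, "a line")        -- placeholder contents

-- A caller that wants two lines simply keeps threading the File.
twoLines :: File -> (File, (Output, Output))
twoLines f0 =
  let (f1, l1) = readLine f0
      (f2, l2) = readLine f1
  in  (f2, (l1, l2))
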
--
One Word to write them all, John Cowan <***@ccil.org>
One Access to find them, http://www.ccil.org/~cowan
One Excel to count them all,
And thus to Windows bind them. --Mike Champion
Rodney D Price
2009-03-03 19:00:56 UTC
Permalink
Haskell *does* need a monad to do I/O. The essential
difference between "read" having a stack shape of
F -> F O, and Haskell's IO monad, is that the Haskell
IO monad is guaranteed to confine any side effects to
the IO monad; that is, the result of an IO operation will
always have a type of the form IO <something>. And
there's no way any result depending on an I/O operation
can avoid having type IO <something>. There's no
escape from the IO monad in Haskell. Lacking a static
type system, Joy offers no such guarantee.

Cat, OTOH, has a Hindley-Milner static type system.
Perhaps Cat could do monads.

-Rod
Post by William Tanksley
The truly pure functional languages, like Haskell, have a mathematical
model for state-changing effects, like objects and I/O. The problem is
that purely functional languages don't express what order their
operations will occur in; they just tell the computer what operations to
do, and let the compiler figure out the best order. So Haskell provides
a mathematical construct called a "monad". Monads can mathematically
only be evaluated in order, so the compiler receives enough information
to know what order to execute your code in.
You don't really need monads: they are just a convenience, because
it would be painful to thread state through every function in
your program just in case a callee wants to use some of it; see
http://lambda-the-ultimate.org/node/1155#comment-12726 for details.
In Joy, the equivalent of the I/O monad is the simple fact that "read"
has a stack shape of F -> F O, preserving the F object on the stack.
William Tanksley
2009-03-03 19:23:56 UTC
Permalink
Post by Rodney D Price
can avoid having type IO <something>. There's no
escape from the IO monad in Haskell. Lacking a static
type system, Joy offers no such guarantee.
Actually, Joy has a dynamic type system that DOES make that guarantee.
That "F" type on the output of 'read' means that it truly is a file, and
the only way to muck with it is through the file itself.
Post by Rodney D Price
Cat, OTOH, has a Hindley-Milner static type system.
Perhaps Cat could do monads.
My claim was that the stack is already a monad, so that nothing further
was needed. John pointed out correctly that it was also important to
return the thing whose state was being modified (thank you, you're
absolutely correct).

I'd now add that you have to have some assurance that there aren't
references to the same state being manipulated "at the same time". That
must be provided by the language's copy and reference semantics, as well
as the semantics of the library responsible for state mutation (in this
example, the I/O library).

I think this means -- in general -- that you have to have custom code
run when duplication and/or destruction happens. Specifically, the
optimizer has to either know about your mutable types, or be completely
naive.

And here's another difference between pure functional languages and
concatenative languages, as a class: a naive implementation of a
concatenative language (i.e. without any optimizer whatsoever) will run
code reasonably quickly. Forth is a prime example; Joy is somewhat of an
example, although its extensive use of functional abilities slows it
down a bit.
Post by Rodney D Price
-Rod
-Wm
Robbert Dalen
2009-03-03 21:17:02 UTC
Permalink
Post by William Tanksley
And here's another difference between pure functional languages and
concatenative languages, as a class: a naive implementation of a
concatenative language (i.e. without any optimizer whatsoever) will run
code reasonably quickly. Forth is a prime example; Joy is somewhat of an
example, although its extensive use of functional abilities slows it
down a bit.
enchilada is purely functional *by design*.... without using monads or
uniqueness types.
enchilada also happens to be 'concatenative' - whatever that means :)

I believe that marrying the two traits, purely functional and
concatenative, will give us the nice properties we want, i.e.

you can factor out any valid (sub)expression and replace that with
a ...word.... (denoting the factored expression).

I believe this property also holds for Joy, if it is stripped of
its file manipulation primitives.
Post by William Tanksley
-Rod
-Wm
- robbert.
John Nowak
2009-03-03 21:41:19 UTC
Permalink
Post by Robbert Dalen
enchilada is purely functional *by design*.... without using monads or
uniqueness types.
I don't believe you! Can you demonstrate how a "hello world" program
would work?
Post by Robbert Dalen
you can factor out any valid (sub)expression and replace that with
a ...word.... (denoting the factored expression). I believe this
property also holds for Joy, if it is stripped of its file
manipulation primitives.
This property holds for Joy in all cases; the impure IO doesn't make
any difference. What you can't do is replace an expression with the
result of evaluating it, e.g., you can't replace '1 2 print' with '1'.
This is another reason why I think it is not useful to say Joy is
referentially transparent.

- John
Robbert Dalen
2009-03-03 21:54:02 UTC
Permalink
Post by John Nowak
Post by Robbert Dalen
enchilada is purely functional *by design*.... without using monads or
uniqueness types.
I don't believe you! Can you demonstrate how a "hello world" program
would work?
"hello world"

or ["hello"] ["world"] | .

The interpreter is a pure function that takes an input expression and
outputs the (rewritten) normal form of that input expression.

Also, the interpreter itself is purely functional.
Post by John Nowak
Post by Robbert Dalen
you can factor out any valid (sub)expression and replace that with
a ...word.... (denoting the factored expression). I believe this
property also holds for Joy, if it is stripped of its file
manipulation primitives.
This property holds for Joy in all cases; the impure IO doesn't make
any difference. What you can't do is replace an expression with the
result of evaluating it, e.g., you can't replace '1 2 print' with '1'.
This is another reason why I think it is not useful to say Joy is
referentially transparent.
but what if you construct a quotation from a file and then execute it?
this piece of concatenative code cannot be replaced by a word, because
the file can be different at every invocation.
Post by John Nowak
- John
John Nowak
2009-03-03 22:09:33 UTC
Permalink
Post by Robbert Dalen
Post by John Nowak
I don't believe you! Can you demonstrate how a "hello world" program
would work?
"hello world"
Maybe a bad example.

Can you demonstrate how to read a line from a file and echo it to the
user?
Post by Robbert Dalen
Post by John Nowak
This property holds for Joy in all cases; the impure IO doesn't make
any difference. What you can't do is replace an expression with the
result of evaluating it, e.g., you can't replace '1 2 print' with '1'.
This is another reason why I think it is not useful to say Joy is
referentially transparent.
but what if you construct a quotation from a file and then execute it?
this piece of concatenative code cannot be replaced by a word, because
the file can be different at every invocation.
I'm saying that you can factor out any subexpression. Any
subexpression is a function. It makes sense to say that you can name
this function and use the definition of the function instead of its
name. (Or, at least you could if quotations were closed.)

Constructing a quotation from a file is another story. You're talking
about referential transparency; replacing an expression with its
*value*. I'm talking about replacing an expression with a name for
that expression.

- John
Robbert Dalen
2009-03-03 22:25:18 UTC
Permalink
Post by John Nowak
Post by Robbert Dalen
"hello world"
Maybe a bad example.
Can you demonstrate how to read a line from a file and echo it to the
user?
you cannot.
I've completely eradicated files :)

Instead, you have to feed the file to the interpreter as an argument.

That said, enchilada doesn't need files.
It has a more elaborate (distributed) storage system that is
referentially transparent.
Post by John Nowak
Constructing a quotation from a file is another story. You're talking
about referential transparency; replacing an expression with its
*value*. I'm talking about replacing an expression with a name for
that expression.
You are right.
To construct a quotation from a file you first need to evaluate the
file primitive.

- robbert
John Nowak
2009-03-03 22:30:19 UTC
Permalink
Post by Robbert Dalen
Post by John Nowak
Can you demonstrate how to read a line from a file and echo it to the
user?
you cannot.
You could with an IO monad or uniqueness types! :)

- John
Robbert Dalen
2009-03-03 22:55:52 UTC
Permalink
Post by John Nowak
Post by Robbert Dalen
Post by John Nowak
Can you demonstrate how to read a line from a file and echo it to the
user?
you cannot.
You could with an IO monad or uniqueness types! :)
If files were immutable, you don't need the IO monad or uniqueness
types! :)

I had my fair share of the IO monad and uniqueness types when trying
to implement enchilada in Haskell and Concurrent Clean.

My experience was that monads totally mess up (otherwise beautiful
functional) code, to the point that it becomes almost unreadable.

Uniqueness types are much better, but there aren't many systems out
there (except Clean) that implement them.

- robbert.
John Nowak
2009-03-03 19:31:39 UTC
Permalink
Post by Rodney D Price
Haskell *does* need a monad to do I/O.
Haskell doesn't need a monad to do I/O. The alternative would be ugly
as Mr. Cowan said, but it's not a requirement.

The easiest way to do it would be to pass a "world value" into the
program and then use uniqueness types to ensure no I/O actions are
performed when there exists more than one reference to the world
value. In this approach, you'd have to manually plumb the world state
around instead of using monads.

- John
John Cowan
2009-03-03 20:57:55 UTC
Permalink
Post by John Nowak
The easiest way to do it would be to pass a "world value" into the
program and then use uniqueness types to ensure no I/O actions are
performed when there exists more than one reference to the world
value. In this approach, you'd have to manually plumb the world state
around instead of using monads.
That's what Clean does, correct?
--
Unless it was by accident that I had John Cowan
offended someone, I never apologized. ***@ccil.org
--Quentin Crisp http://www.ccil.org/~cowan
John Nowak
2009-03-03 21:14:46 UTC
Permalink
Post by John Cowan
Post by John Nowak
The easiest way to do it would be to pass a "world value" into the
program and then use uniqueness types to ensure no I/O actions are
performed when there exists more than one reference to the world
value. In this approach, you'd have to manually plumb the world state
around instead of using monads.
That's what Clean does, correct?
Yes. The 'Start' function in your program makes this clear on the type
level:

Start :: *World -> *World
Start world = startIO NDI Void initialise [] world

In my personal opinion, the monadic approach in Haskell is much
cleaner. I'd still like to see uniqueness types added to Haskell
though for things like destructive array update; I find uniqueness
types preferable to monads in such cases. (Well, really, I just want
to see Disciple Haskell become usable.)

In a concatenative language, one thing you can do is add the 'world'
to the top of the stack and have all operations work below it except
those that perform IO. This does let you sort of hand-wave and claim
the language is purely functional. The problem is that *everything*
now has a forced evaluation order, not just those things that perform
IO. In other words, even though your language is purely functional,
you're still essentially stuck with having to use eager evaluation.
This is why I claim this approach is mostly hand-waving; it doesn't
let you do anything different than you otherwise would.

Ideally, you'd handle the world value explicitly; this would make it
clear which things can be reduced in an arbitrary order and which
things cannot. You'd need to add a type system to ensure you don't
violate assumptions about how the world value is used however. If you
were allowed to duplicate the world value (or put it in an aggregate
and then duplicate the aggregate), bad things would happen.

- John
John Cowan
2009-03-03 21:35:38 UTC
Permalink
Post by John Nowak
In a concatenative language, one thing you can do is add the 'world'
to the top of the stack and have all operations work below it except
those that perform IO. This does let you sort of hand-wave and claim
the language is purely functional. The problem is that *everything*
now has a forced evaluation order, not just those things that perform
IO. In other words, even though your language is purely functional,
you're still essentially stuck with having to use eager evaluation.
This is why I claim this approach is mostly hand-waving; it doesn't
let you do anything different than you otherwise would.
Nobody ever claimed that Joy was lazy; "pure functional" does not mean
"pure, functional, and lazy". Lazy languages have their virtues, but
so do eager ones.

Similarly, it has a type system, just not a static type system. Those too
have their disadvantages: I find it useful that languages like Scheme
and Joy can have fully heterogeneous lists.
--
May the hair on your toes never fall out! John Cowan
--Thorin Oakenshield (to Bilbo) ***@ccil.org
John Nowak
2009-03-03 21:50:17 UTC
Permalink
Post by John Cowan
Post by John Nowak
In a concatenative language, one thing you can do is add the 'world'
to the top of the stack and have all operations work below it except
those that perform IO. This does let you sort of hand-wave and claim
the language is purely functional. The problem is that *everything*
now has a forced evaluation order, not just those things that perform
IO. In other words, even though your language is purely functional,
you're still essentially stuck with having to use eager evaluation.
This is why I claim this approach is mostly hand-waving; it doesn't
let you do anything different than you otherwise would.
Nobody ever claimed that Joy was lazy; "pure functional" does not mean
"pure, functional, and lazy". Lazy languages have their virtues, but
so do eager ones.
I never claimed anyone ever claimed Joy was lazy. My claim was
essentially that, if Joy were purely functional in the same sense
Haskell was, you *could* evaluate it lazily if you wanted to. Given
that you cannot do so, I don't think it is correct to call it "purely
functional". The reason for this is that it does not have the property
that you can replace an expression with the result of evaluating it as
Haskell does (i.e. referential transparency). Such a property is what
enables sensible lazy evaluation.

I'm aware eager languages have their virtues. I did after all say I'd
prefer Disciple Haskell (which is eager and uses an effect system to
control effects instead of monads).
Post by John Cowan
Similarly, it has a type system, just not a static type system.
I'll clarify then: You need a static type system to enforce that the
world value is used properly.

The only possible alternative I can think of is to reference count
everything and error if attempting to use the world value when it has
more than one reference to it. Unfortunately, this approach is
unsatisfactory in a number of ways that I could elaborate on if anyone
is interested.

- John
John Cowan
2009-03-03 22:12:02 UTC
Permalink
Post by John Nowak
The only possible alternative I can think of is to reference count
everything and error if attempting to use the world value when it has
more than one reference to it. Unfortunately, this approach is
unsatisfactory in a number of ways that I could elaborate on if anyone
is interested.
Hmm. Joy doesn't do that: it only lets you have pointers to the
world, and you can copy them all you want, but there is only one world.
(Actually, there is one world per open file, so funky interactions between
files are still possible, as well as the case of reading in a list
and then executing it.)
--
Evolutionary psychology is the theory John Cowan
that men are nothing but horn-dogs, http://www.ccil.org/~cowan
and that women only want them for their money. ***@ccil.org
--Susan McCarthy (adapted)
John Nowak
2009-03-03 22:18:16 UTC
Permalink
Post by John Cowan
Post by John Nowak
The only possible alternative I can think of is to reference count
everything and error if attempting to use the world value when it has
more than one reference to it. Unfortunately, this approach is
unsatisfactory in a number of ways that I could elaborate on if anyone
is interested.
Hmm. Joy doesn't do that: it only lets you have pointers to the
world, and you can copy them all you want, but there is only one world.
(Actually, there is one world per open file, so funky interactions between
files are still possible, as well as the case of reading in a list
and then executing it.)
Aye. In order to have purity in the same sense as Haskell, you need
exactly one world. A function like 'read_file' would need to take both
the appropriate file handle and the world object. Allowing multiple
references to the same world object through pointers (as Joy would
have to) would break things. This is why you'd need the static type
system.

- John
Robbert Dalen
2009-03-03 22:17:00 UTC
Permalink
Post by John Nowak
I never claimed anyone ever claimed Joy was lazy. My claim was
essentially that, if Joy were purely functional in the same sense
Haskell was, you *could* evaluate it lazily if you wanted to. Given
that you cannot do so, I don't think it is correct to call it "purely
functional". The reason for this is that it does not have the property
that you can replace an expression with the result of evaluating it as
Haskell does (i.e. referential transparency). Such a property is what
enables sensible lazy evaluation.
i think referential transparency doesn't mean that.
it means that a function should always return the same output given
the same input.
in that sense, joy is referentially transparent.

and joy could be lazy (just as enchilada)
Post by John Nowak
- John
- robbert
John Nowak
2009-03-03 22:28:52 UTC
Permalink
Post by Robbert Dalen
Post by John Nowak
I never claimed anyone ever claimed Joy was lazy. My claim was
essentially that, if Joy were purely functional in the same sense
Haskell was, you *could* evaluate it lazily if you wanted to. Given
that you cannot do so, I don't think it is correct to call it "purely
functional". The reason for this is that it does not have the
property
that you can replace an expression with the result of evaluating it as
Haskell does (i.e. referential transparency). Such a property is what
enables sensible lazy evaluation.
i think referential transparency doesn't mean that.
it means that a function should always return the same output given
the same input.
They're equivalent statements. If a function always returns the same
output given the same input, then you can replace an application of
that function to a given input with the output of that application.

In Joy, neither holds. A function like 'rand' is going to return
different values given the same input. The only way to prevent this is
to pass a world value through the program and have 'rand' require
exclusive access to that world value in order to determine its output.
The only way to do this sensibly (i.e. in a way that would permit
arbitrary order reduction) is with a type system.
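
A minimal Haskell sketch of what I mean (a hand-rolled toy generator, not
Joy's actual 'rand'): once the generator state is threaded explicitly, the
function is pure, so the same input always yields the same output, and any
call can be replaced by its result:

newtype Gen = Gen Integer deriving Show

-- toy linear congruential step; the Gen value plays the role of the "world"
rand :: Gen -> (Integer, Gen)
rand (Gen s) = let s' = (1103515245 * s + 12345) `mod` 2147483648
               in  (s' `mod` 100, Gen s')

main :: IO ()
main = do
  let g0       = Gen 42
      (a, g1)  = rand g0
      (b, _g2) = rand g1
  print (a, b)   -- deterministic: the same pair on every run
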
Post by Robbert Dalen
and joy could be lazy
No it couldn't. Take the following example where 'print' takes a stack
with a string on top and prints it to the screen:

"foo" print "bar" print (initial state)
"bar" print (eval '"foo" print')
(eval '"bar" print')

In order for this to have consistent behavior, you have to reduce in
an eager manner.

Now take this example where 'print2' takes a stack with a string as
the second item and a world value on the top and returns a new world
as its result:

"foo" `worldA print2 "bar" swap print2 (initial state)
`worldB "bar" swap print2 (eval '"foo" `worldA
print2')
"bar" `worldB print2 (eval 'swap')
`worldC (eval '"bar" `worldB
print2')
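
For concreteness, roughly the same idea as a minimal Haskell sketch (the
World type and the names are invented for illustration): the "world" is
modelled as a log of output, and the order of the two prints is pinned
purely by the data dependency on the world value:

newtype World = World [String]   -- toy model: the world is just the output so far

print2 :: String -> World -> World
print2 s (World out) = World (out ++ [s])

example :: World -> World
example w0 =
  let w1 = print2 "foo" w0   -- w2 needs w1, so "foo" is logged before "bar"
      w2 = print2 "bar" w1
  in  w2

main :: IO ()
main = let World out = example (World []) in mapM_ putStrLn out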

- John
Robbert Dalen
2009-03-03 22:49:49 UTC
Permalink
Post by John Nowak
In Joy, neither holds. A function like 'rand' is going to return
different values given the same input. The only way to prevent this is
to pass a world value through the program and have 'rand' require
exclusive access to that world value in order to determine its output.
The only way to do this sensibly (i.e. in a way that would permit
arbitrary order reduction) is with a type system.
but rand is not pure.
i'm talking about joy with only pure functions.
you can create randomness purely.

for instance, you can take the sha1 hash of any value to create a
random number.

this random number can again and again be hashed, for instance to
create a random walk.
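
As a minimal Haskell sketch of that idea (using a simple 64-bit mixing
function as a stand-in for sha1, purely for illustration):

import Data.Bits (shiftR, xor)
import Data.Word (Word64)

-- hash the previous value to get the next one; same seed, same walk, every time
mix :: Word64 -> Word64
mix z0 =
  let z1 = (z0 `xor` (z0 `shiftR` 33)) * 0xff51afd7ed558ccd
      z2 = (z1 `xor` (z1 `shiftR` 33)) * 0xc4ceb9fe1a85ec53
  in  z2 `xor` (z2 `shiftR` 33)

walk :: Word64 -> [Word64]
walk = iterate mix

main :: IO ()
main = print (take 5 (walk 42))
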
Post by John Nowak
Post by Robbert Dalen
and joy could be lazy
No it couldn't. Take the following example where 'print' takes a stack
"foo" print "bar" print (initial state)
"bar" print (eval '"foo" print')
(eval '"bar" print')
again, print is not pure.

i do think that you are right that you can run into problems if you
make the (possible) evaluation of an expression *observable*.
this is only problematic with 'open' joy quotations, for instance:

[1 2 +] first
1

is different from:

[3] first
3
however, this problem could be solved if first were defined as:

- to first evaluate the quotation to normal form
- and then take the first element of normal form

- robbert.
John Nowak
2009-03-03 22:57:56 UTC
Permalink
Post by Robbert Dalen
Post by John Nowak
In Joy, neither holds. A function like 'rand' is going to return
different values given the same input. The only way to prevent this is
to pass a world value through the program and have 'rand' require
exclusive access to that world value in order to determine its output.
The only way to do this sensibly (i.e. in a way that would permit
arbitrary order reduction) is with a type system.
but rand is not pure.
i'm talking about joy with only pure functions.
I thought we were talking about "Joy". Of course I agree that Joy is
pure if you take out all the impure functions. The trouble is that
you're left with a language that's useless.

- John
Robbert Dalen
2009-03-03 23:11:59 UTC
Permalink
Post by John Nowak
I thought we were talking about "Joy". Of course I agree that Joy is
pure if you take out all the impure functions. The trouble is that
you're left with a language that's useless.
*that* sir, is where we disagree!

i am attracted to *pure* joy because of its many (theoretical)
properties.

in a sense, enchilada is essentially a pure joy, with some more
powerful primitives.

i guess my design goals are just different from yours.

you think types will make the difference.
i think immutability will make the difference.

granted, enchilada looks like an experiment, but i'm confident that
immutability will prove *much* more important than types in the near
future.

- robbert
John Nowak
2009-03-03 23:21:00 UTC
Permalink
Post by Robbert Dalen
Post by John Nowak
I thought we were talking about "Joy". Of course I agree that Joy is
pure if you take out all the impure functions. The trouble is that
you're left with a language that's useless.
i am attracted to *pure* joy because of its many (theoretical)
properties... i guess my design goals are just different from yours.
I'm certainly attracted to Joy for similar reasons. However, I also want
to be able to read and write files on unix. Call me crazy.
Post by Robbert Dalen
granted, enchilada looks like an experiment, but i'm confident that
immutability will prove *much* more important than types in the near
future.
I look forward to it. I'm sure you'll be right if you can make it
work, although I'll admit that I'm skeptical about the "working" part.

Please keep us informed of your progress.

- John
John Cowan
2009-03-04 16:01:53 UTC
Permalink
Post by Robbert Dalen
i do think that you are right that you can run into problems if you
make (the possibly evaluation) of an expression *observable*.
Joy terms consisting of a sequence of expressions in brackets are not
necessarily quotations. They are simply lists, Joy's only general-purpose
data structure (it has strings and sets of very small integers as well,
but the last are hardly used). A quotation is simply a particular
application of a list.

Modern Lisps have had to back off from the similar dual interpretation of
lists as data and code, because of the variable binding issues involved.
Concatenative languages don't have that problem, so code/data duality
remains. But even in Joy, most lists aren't (useful) code at all.
--
Take two turkeys, one goose, four John Cowan
cabbages, but no duck, and mix them http://www.ccil.org/~cowan
together. After one taste, you'll duck ***@ccil.org
soup the rest of your life.
--Groucho
William Tanksley
2009-03-05 20:26:18 UTC
Permalink
Post by John Nowak
Post by John Cowan
Post by John Nowak
In a concatenative language, one thing you can do is add the 'world'
to the top of the stack and have all operations work below it except
those that perform IO. This does let you sort of hand-wave and claim
the language is purely functional. The problem is that *everything*
now has a forced evaluation order, not just those things that perform
IO. In other words, even though your language is purely functional,
you're still essentially stuck with having to use eager evaluation.
I don't see why this approach forces a totally fixed evaluation order.
It seems to me that it forces a fixed order only for words which don't
"work below the world" (I'm not certain precisely what you mean by
that).

In general, you don't have to add the world to the top of the stack; you
can add the I/O to the "state of the system", which includes the stack.
A dumb optimizer has to assume that every operation changes the entire
state; a smarter one knows to look at stack effects; a really smart one
knows to look for I/O effects.
Post by John Nowak
Post by John Cowan
Post by John Nowak
This is why I claim this approach is mostly hand-waving; it doesn't
let you do anything different than you otherwise would.
Nobody ever claimed that Joy was lazy; "pure functional" does not mean
"pure, functional, and lazy". Lazy languages have their virtues, but
so do eager ones.
I never claimed anyone ever claimed Joy was lazy. My claim was
essentially that, if Joy were purely functional in the same sense
Haskell was, you *could* evaluate it lazily if you wanted to. Given
IMO a stronger description would be "out of order execution". You want
an optimizer to be able to decide that one particular word can be
executed prior to a word that appears in the source before it. The
phrase "lazy execution" would seem to imply -- in the context of a
concatenative language -- that a function could leave a "promise" on the
stack. In an applicative language, 'lazy' means that actual parameters
can be bound to such promises.

And you _can_ execute Joy out of order. You just have to watch your
dataflow. Doing so does not depend on whether the code is functional; it
depends on dataflow.
Post by John Nowak
that you cannot do so, I don't think it is correct to call it "purely
functional". The reason for this is that it does not have the property
that you can replace an expression with the result of evaluating it as
Haskell does (i.e. referential transparency). Such a property is what
enables sensible lazy evaluation.
No, I wouldn't claim that; I just claim that concatenative languages
_can_ be purely (or partially, or not at all) functional, and still
retain full concatenativity. Of course, they lose OTHER advantages.
Post by John Nowak
I'm aware eager languages have their virtues. I did after all say I'd
prefer Disciple Haskell (which is eager and uses an effect system to
control effects instead of monads).
Your effects inference work is very interesting.
Post by John Nowak
Post by John Cowan
Similarly, it has a type system, just not a static type system.
I'll clarify then: You need a static type system to enforce that the
world value is used properly.
The only possible alternative I can think of is to reference count
everything and error if attempting to use the world value when it has
more than one reference to it. Unfortunately, this approach is
unsatisfactory in a number of ways that I could elaborate on if anyone
is interested.
One can also simply make it impossible to duplicate the world by not
putting it on the stack.
Post by John Nowak
- John
-Wm
John Nowak
2009-03-05 22:20:13 UTC
Permalink
Post by William Tanksley
Post by John Nowak
In a concatenative language, one thing you can do is add the 'world'
to the top of the stack and have all operations work below it except
those that perform IO. This does let you sort of hand-wave and claim
the language is purely functional. The problem is that *everything*
now has a forced evaluation order, not just those things that perform
IO. In other words, even though your language is purely functional,
you're still essentially stuck with having to use eager evaluation.
I don't see why this approach forces a totally fixed evaluation order.
It seems to me that it forces a fixed order only for words which don't
"work below the world" (I'm not certain precisely what you mean by
that).
If you're threading the world around on the stack, and every operation
works on a stack with the world on top (but perhaps does not read/
update it), you can't reduce any function until it gets access to the
world. Sure, you can be smart and reduce functions that you know don't
ultimately care about the world, but you can do that without passing any sort
of world around anyway; simply only allow arbitrary-order reduction
when no functions with side effects will be reduced.

The point of a "pure" language is that you never have to know which
functions have side effects because *none* of them do. Adding a world
and passing it to everything means side effects are *eliminated*, but
you now have no flexibility when reducing programs. If, however, your
language only passes the world to functions that need it, you still
eliminate all side effects but have more flexibility in how you
evaluate the program. It becomes completely unnecessary to worry about
which functions are pure and which aren't because they're all pure. If
their conditions are satisfied, you can reduce.
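
A minimal Haskell sketch of the contrast (the names and the toy World are
invented for illustration): 'double' never takes the world, so it can be
reduced whenever you like (or not at all); 'writeLine' does, so its
applications are ordered by the world dependency alone:

newtype World = World [String]          -- toy world: just a log of output

writeLine :: String -> World -> World
writeLine s (World out) = World (out ++ [s])

double :: Int -> Int                    -- pure and world-free: reducible in any order
double n = n * 2

program :: World -> (Int, World)
program w0 =
  let x  = double 21                    -- no world involved, order unconstrained
      w1 = writeLine "hello" w0
      w2 = writeLine (show x) w1        -- ordered after w1 via the world value
  in  (x, w2)

main :: IO ()
main = let (_, World out) = program (World []) in mapM_ putStrLn out
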
Post by William Tanksley
A dumb optimizer has to assume that every operation changes the entire
state; a smarter one knows to look at stack effects; a really smart one
knows to look for I/O effects.
Aye, and this is what I would do in a language with effect inference.
The thing to be clear on here is that a language with effect inference
is not pure because it still has effects. Things like "full" lazy
evaluation are not generally possible. Languages with effect inference
like Disciple Haskell are eager and impure.
Post by William Tanksley
No, I wouldn't claim that; I just claim that concatenative languages
_can_ be purely (or partially, or not at all) functional, and still
retain full concatenativity. Of course, they lose OTHER advantages.
Agreed.

- John
Don Groves
2009-03-03 00:19:59 UTC
Permalink
Post by spir
Hello,
New to the list.
A long time ago, I was fascinated by Forth's ability to let one
write programs/words by simply combining lower level words/programs.
This was wonderfully illustrated by Logo's visual version of the
feature.
I intuitively understand "concatenative" as meaning "constructive"
in the sense of "brick programming". This is what I have always
looked for, probably. Still, I do not really get why some languages
may be so qualified; why most languages do not have this property.
I just discover that Forth (which I onlyused very few a long time
ago) is supposed to be a concatenative language. I read the article
on wikipedia, some more related texts and a few threads on this
list. I do not know Joy (will read a bit while waiting for your
replies ;-).
Warning: if you were inspired by Forth, Joy will
change your life!

You will become hooked on its ideas, just as so
many here have before you ;-)
--
don
William Tanksley
2009-03-03 20:00:25 UTC
Permalink
Post by spir
This leads me to wonder whether concatenation is related to
referential transparency. Let us imagine a program which parse tree
I have the impression that the language is concatenative iff we can
(Examples snipped.)
Post by spir
Id est replace words with their definitions.
"A language is concatenative iff one can replace words with their
definitions." I think that's at least CLOSE. You might also have to be
able to replace a definition with the word as well -- it has to run both
ways. You also have to be "persnickety" that you're not allowed to
rename anything during the replacement (i.e. you're not allowed to do
beta reduction or alpha conversion).

Of course, this is not a complete way to evaluate a language; it doesn't
allow for recursion.
Post by spir
This means that a word's definition only relies on the definitions on
the words it references. So what makes most languages "unsafe" in this
aspect? What kind of element in a definition breaks the scheme? My
hypethesis is: definitions may use outer things like environment,
state, variables... In the tree above, this translates to the fact
that several occurrences of, say, z1 do not perform exactly the same
action. For instance, it may not push the same value a stack: then, if
z2 uses this value, it will not have the same result.
Not really. The problem with most languages is that the functions are
defined using local variables, so when you attempt to replace their name
with their definition, you have to FIRST change the definition by
replacing all the instances of each local variable with the value of the
actual parameter.

The fact that some things have state is a problem for lambda expansion,
but it's not even a question when you don't attempt to replace formal
parameters with actual parameters.
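
A tiny Haskell illustration of that substitution step (an invented example,
not anyone's compiler): inlining 'square' at a call site means introducing
the lambda and then substituting the actual parameter for 'x', whereas a
concatenative definition would be spliced in verbatim:

square :: Int -> Int
square x = x * x

callSite :: Int
callSite = square (3 + 4)          -- replacing the name by its definition gives ...

inlined :: Int
inlined = (\x -> x * x) (3 + 4)    -- ... which still needs x := (3 + 4) substituted

main :: IO ()
main = print (callSite, inlined)   -- (49,49)
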
Post by spir
What lets me uneasy with this point of view is that "concatenative" is
not supposed to be a synonym of "purely functional". Or what? Are
purely functional languages necessarily, logically, concatenative? But
maybe there are other ways to achieve this property, meaning that
concatenative is a superset of functional?
Nope, they're completely different properties.

A concatenative language is completely associative: you can group words
within it however you'd like, and maintain the same semantics. A pure
functional language is referentially transparent: if two complete
expressions have the same semantics, they can be interchanged without
changing the program semantics. The functional language, however,
requires "complete expressions"; the concatenative language doesn't.
Post by spir
Or am I totally wrong?
Anyway, forth is not qualified as functional. Also, stack based
languages can well use data external to a word definition (outside the
stack). Are they then still concatenative?
Yes. But they're not functional languages -- just concatenative.
Post by spir
-2- the self expression hypothesis ================
Another unusual feature of Forth is its ability to compile itself.
This is related I guess to the property that new words are equally
treated as builtin words. Io has a similar feature and, as I
understand it, a consequence is that it has no keyword in the usual
sense of the term.
To allow such a property, these languages' grammars map directly to a
parse tree (like Lisp, too) without any structural transformation.
Io and Forth work for entirely different reasons. For Io, it has a
textual grammar that's a simple tree, and every message has full access
to the data-structure representation of that tree.

For Forth, the grammar is a flat list (not a tree), and because the
source code is already a flat list, giving functions access to their own
source is sufficient to allow them full access to the data-structure.

Most concatenative languages don't allow that, and even in Forth it's
discouraged. I don't have a good theory to account for what happens when
you let a word freely read and/or modify the source in which it appears;
you can do fun things, but you also lose all guarantees of concatenative
behavior. It makes more sense to me for a language to provide at most a
formal macro facility, so that source rewriting will follow clear rules
and be clearly delimited.
Post by spir
Still, there must be some primitives here or there, "under the hood",
else such a language would really be a pure form and a program would
execute without *doing* anything ;-) These primitives are not
expressed in the language itself. There are operational primitives
(such as IO), but also some minimal "meta-linguistic" ones such as in
Correct; a language consists of both syntax and semantics. The syntax of
a concatenative language consists only of a flat list; the semantics
says that the members of that list may be selected from a set of
functions called "basis functions".

Forth -- and any practical language -- has a huge set of basis
functions, and allows you to define your own functions that are used
just like basis functions.

See Kerby's paper for more mathematical detail.
http://tunes.org/~iepos/joy.html

IF YOU DARE. :)
Post by spir
* A number token means "push this number (value) on the stack"
* ':...;' means "parse and store this definition"
* A word token means "recall this word's definiton"
[Expressed with my own words that may not be accurate.]
Right -- Forth has an additional syntax (not purely concatenative) to
allow you to do real work. :-)
Post by spir
Now, my question is: is there any relation between such self-defining
properties and the notion of "concatenativity"?
Yes. Concatenative languages have fully listlike grammar, so every
element of a program can be defined in terms of what it should be
replaced with; in a language with more complex syntax some elements have
to serve syntactic function (the parens in Lisp and Io, the spaces in
Io).
Post by spir
-3- human expressiveness ======================
I wonder about the practical usefulness of the concatenative
propertie(s). The ability of building a program by pure composition is
great. But it seems to lead to great modelizing restrictions, too. I
will illustrate my question with a simple example.
Lets take the "mystery number" game. [Guess a number between eg 1 &
100. The "game master" answers "more" or "less" until found.] As I see
* There are 2 actors.
* They communicate: the player "guesses", the master "answers".
* The master holds a "secret" data item; the player has current "lower" and "upper" bounds.
This example really well maps to OO semantics, sure, I didn't choose
it randomly;-) But I really think, too, that it illustrates a kind of
natural way to express most of non-scientific models. (I.e. models
that are rather a system than a "problem" mapping to a (possibly
tree-like) process.)
There's no reason NOT to use OO modeling in a concatenative language.
The only type of language that's incompatible with that is the immutable
functional kind.
Post by spir
How would a model of this game expressed in a concatenative language
look like? (I really ask: if ever some of you want to answer with
source code...)
If we're restricted to the same model, it'd look the same way. If we can
choose a different model, it'd look different.
Post by spir
I guess that in a functional language it would lead to conceptual
distortion. Slight, manageable distortion, because the game is so
simple. What I mean is that such langugages are often unable to allow
a direct mapping from a model to source code expression. In other
words: for me, a program should look like (be analog to) what it
pretends to express.
That's only true if the model is OO. Not all models are OO. Don't forget
that OO modeling is also incompatible with relational modeling!
Post by spir
This statement may only reflect a personal bias. Prototype based
languges are, imo, the best metaphor / paradigm as of know, to allow
direct, "natural", expression (In the OO world, most class-based
languages yield on the contrary incredible and highly abstract
complication -- and extreme dependencies!).
I know -- I really like Io for that.
Post by spir
Is there a way to accord concatenation and basic OO? Or are these
approaches -- as I suspect -- definitely incompatible?
Absolutely not incompatible. Factor has a complete OO system; it even
used to be based on prototypes (that was changed because the Factor
maintainers understood class based systems better and didn't want to
maintain a system they didn't understand; in particular they had buggy
delegation). It still has its prototype object system in a side library
(no longer in the core); perhaps a person knowledgeable in delegation
could fix it.

At any rate, there's no reason whatsoever for concatenative languages to
have any problem with object oriented modeling and/or programming.
Post by spir
Denis
-Wm
John Nowak
2009-03-03 20:46:09 UTC
Permalink
Post by William Tanksley
Post by spir
Id est replace words with their definitions.
"A language is concatenative iff one can replace words with their
definitions." I think that's at least CLOSE. You might also have to be
able to replace a definition with the word as well -- it has to run both
ways. You also have to be "persnickety" that you're not allowed to
rename anything during the replacement (i.e. you're not allowed to do
beta reduction or alpha conversion).
You make it sound like Haskell might qualify. You can certainly
replace words with their definitions in Haskell. Replacing expressions
with the word that names them is possible as well for any "valid sub-
program". For example, I can take '(\x -> foo x 10) y' and convert it
to 'bar y' if 'bar' is defined as '\x -> foo x 10'. I could not,
however, do something like replace 'foo x 10' with something because
it is not a valid sub-program (as 'x' is not introduced anywhere).
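
To make that concrete (with an invented 'foo'):

foo :: Int -> Int -> Int
foo a b = a + b

bar :: Int -> Int
bar = \x -> foo x 10

before, after :: Int -> Int
before y = (\x -> foo x 10) y      -- a complete sub-expression ...
after  y = bar y                   -- ... can be replaced by the name that denotes it

main :: IO ()
main = print (before 5, after 5)   -- (15,15); but 'foo x 10' alone has 'x' free,
                                   -- so it is not a replaceable sub-program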

I think the best definition of a concatenative language is as follows:

"All terms denote functions and their juxtaposition denotes
composition."

Everything else follows from this. It also makes it clear why Haskell
doesn't qualify.

- John
pml060912
2009-03-05 05:49:12 UTC
Permalink
--- In ***@yahoogroups.com, William Tanksley <***@...> wrote:
.
.
.
Post by William Tanksley
Forth -- and any practical language -- has a huge set of basis
functions, and allows you to define your own functions that are used
just like basis functions.
No, actually, it has a rather small set of basis functions, with a number of others often provided in advance for convenience to save defining them later. The latter are often set up in low level code for efficiency, but in principle they could be set up as high level extensions and/or loaded later. However, some implementations deliberately go for a very minimal basis set, ones like eforth and hforth aimed at getting up and running on platforms without much in the way of a software base to help install new stuff. These typically have fewer than 30 words defined at low level (and you could get away with even fewer), with just a few more high level ones in the source that could be loaded later instead.

Regardless, a huge basis is not in keeping with the typical Forth philosophy; nearly all Forths do NOT have huge amounts set up in advance, and the stuff that is, is just a matter of convenience. P.M.Lawrence.
William Tanksley
2009-03-05 15:18:57 UTC
Permalink
Post by William Tanksley
Forth -- and any practical language -- has a huge set of basis
functions, and allows you to define your own functions that are used
just like basis functions.
No, actually, it has a rather small set of basis functions.
Probably we have different definitions for the terms "huge" and "basis
functions".

I use the term "basis function" in the formal, mathematical sense
(appropriate for the "functional language" context of the discussion),
rather than as a synonym for "primitive" or "implemented in machine
code". In that sense, a "huge" set of basis functions is when a language
requires far more than the basis functions required in a mathematically
minimal basis set for that type of language.

I don't think it's useful to treat "basis functions" as a synonym for
"primitives".

-Wm
pml060912
2009-03-06 01:21:43 UTC
Permalink
Post by William Tanksley
Post by William Tanksley
Forth -- and any practical language -- has a huge set of basis
functions, and allows you to define your own functions that are used
just like basis functions.
No, actually, it has a rather small set of basis functions.
Probably we have different definitions for the terms "huge" and "basis
functions".
I use the term "basis function" in the formal, mathematical sense
(appropriate for the "functional language" context of the discussion),
rather than as a synonym for "primitive" or "implemented in machine
code". In that sense, a "huge" set of basis functions is when a language
requires far more than the basis functions required in a mathematically
minimal basis set for that type of language.
I don't think it's useful to treat "basis functions" as a synonym for
"primitives".
I wasn't doing so, I was using it as "the set of words (functions) needed to build up the rest". In fact, in a given implementation, there will probably be many of the others implemented as machine code primitives for efficiency reasons, so that set could well be greater. Forth does NOT have or need a huge basis but a rather small one. In the early days that gave a big plus, being able to install Forth easily without the aid of much in the way of software tools for the platform; you only had a little of the awkward work to do before you had a working Forth to help you do the further (application?) work that you were there for in the first place.

Eforth in particular only has about 30 words in its basis, and it could probably be cut down further though it wouldn't be worth it. It already implements multiplication through looping and addition (with the expectation that a working system would soon be used to upgrade that and similar inefficient things). I once heard of an implementation called picoforth that I can't track down, that built up stack manipulation words like DROP, DUP and SWAP using temporary/garbage variables and the memory access words @ and !.

All this demonstrates that Forth has a small basis. There is room to argue whether the words used in the text interpreter should also count as part of it, but even there you don't need many. Again, look at eforth for a case in point. P.M.Lawrence.
spir
2009-03-05 11:52:32 UTC
Permalink
Le Mon, 2 Mar 2009 21:19:32 +0100,
Post by spir
Hello,
New to the list.
[...]
<< The mainstream imperative languages have a state of associations between assignable variables and their current values. The values are changed by assignments during the run of the program. These languages also have an environment of formal / actual parameter associations that are set up by calls of defined functions or procedures. Purely functional languages have no state, but at least the mainstream applicative functional languages invariably have an environment. Compositional functional languages and hence concatenative functional languages have no state and need no environment. The concatenative language Joy has neither. >>

I actually found the whole FAQ page really enlightening. It's not a FAQ, indeed, but rather a progressive introduction to the notion of composition in PLs. And the progression is, I guess, pedagogically good even without a CS background. There may be some improvements: eg now I cannot see any difference between "concatenative" and "compositional" ;-) Some transitions are still a bit abrupt, too.
What about asking the author to publish this text on a wiki and/or leave it as an introduction (modified or as is) on the list's home site?

denis
------
la vita e estrany
William Tanksley
2009-03-05 15:38:30 UTC
Permalink
Post by spir
good even without CS background. There may be some improvements: eg
now I cannot make any difference between "concatenative" and
"compositional" ;-)
The term Compositional has been used before to describe concatenative
languages. By the FAQ's definition, all concatenative languages are
compositional, but not vice versa (Haskell has a subset that's
compositional by that definition, but isn't concatenative because the
composition operator is explicit).
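
For example (a small illustration, not from the FAQ): the point-free
fragment of Haskell composes functions, but the composition has to be
written out with an explicit operator rather than implied by juxtaposition:

sumOfDoubledEvens :: [Int] -> Int
sumOfDoubledEvens = sum . map (* 2) . filter even   -- juxtaposition here is application

main :: IO ()
main = print (sumOfDoubledEvens [1 .. 10])          -- 60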

There are three reasons why I don't recommend using the term
compositional outside of pedagogy.

First, it's a very theoretical and pure definition: it requires that the
ONLY operation be composition. There's not much you can sneak past that.
We've since developed definitions of concatenativity that seem to be a
lot more flexible, allowing us to have a concept of pure versus impure
concatenative languages. This also implies that we can define a
concatenative language that's not compositional.

Second, there are no studied benefits to compositional languages that
aren't also present in concatenative languages, and concatenative
languages have the huge benefit of token associativity that's not
present in compositional languages with explicit composition operators.

Third, the term is already in use: a compositional language is one whose
facilities are mainly intended to help the programmer tie applications
written in other languages together.
Post by spir
denis
-Wm
John Nowak
2009-03-05 21:54:38 UTC
Permalink
Post by William Tanksley
There are three reasons why I don't recommend using the term
compositional outside of pedagogy.
First, it's a very theoretical and pure definition: it requires that the
ONLY operation be composition.
Really? Does quotation not violate this?
Post by William Tanksley
We've since developed definitions of concatenativity that seem to be a
lot more flexible, allowing us to have a concept of pure versus impure
concatenative languages. This also implies that we can define a
concatenative language that's not compositional.
I disagree. According to the Wikipedia definition:

"A concatenative programming language is one in which all terms
denote functions and the juxtaposition of functions denotes
function composition."

Now you might say, "But John, you wrote the Wikipedia definition,"
upon which time I'd stick my fingers in my ears and start making "nana-
nana-nana" noises.
Post by William Tanksley
Second, there are no studied benefits to compositional languages that
aren't also present in concatenative languages,
I'd say FP and FL are compositional if anything is, and their benefits
have certainly been the topic of numerous research papers. It was all
the rage at one point. Certainly the Bird-Meertens formalism fits in
here somewhere as well.
Post by William Tanksley
and concatenative languages have the huge benefit of token
associativity that's not present in compositional languages with
explicit composition operators.
Eh? '(f . g) . h == f . (g . h)'; seems to work to me.

- John
William Tanksley
2009-03-05 23:16:55 UTC
Permalink
Post by John Nowak
Post by William Tanksley
There are three reasons why I don't recommend using the term
compositional outside of pedagogy.
First, it's a very theoretical and pure definition: it requires that
the ONLY operation be composition.
Really? Does quotation not violate this?
That's fair to ask... I'm not sure whether quotation is an "operation"
as such. I think it's a very academic question, since I think
"compositional" is of only pedagogical value.
Post by John Nowak
Post by William Tanksley
We've since developed definitions of concatenativity that seem to be a
lot more flexible, allowing us to have a concept of pure versus impure
concatenative languages. This also implies that we can define a
concatenative language that's not compositional.
"A concatenative programming language is one in which all terms
denote functions and the juxtaposition of functions denotes
function composition."
I think it's a good enough definition :-). Its only problem is that,
like the first paragraph of many Wikipedia entries, it's a little
absolutist. Perhaps removing "all" might be interesting -- later on we
can talk about concatenative purity. Or perhaps we should steal text
from the Functional_programming page.

"In Computer_science, functional programming is a Programming_paradigm
that treats Computation as a series of sequential transformations on a
single compound data item, often involving a stack. It emphasizes the
composition of operations, in contrast to the Functional_programming
style, which emphasises the application of functions to data. There have
been many practical programming languages which are now known to be
(impurely) concatenative, but which were not designed with any such goal
in mind; those languages are known as Stack_languages."

Bla bla bla. That was fun, but probably was of little value. One thing I
like about it was that I didn't assume that we were discussing a
_functional_ compositional language -- I never really noticed we were
doing that before. We can then discuss how "operations" can be
considered as functions from a world-state to a world-state, and how a
purely functional concatenative language operates with pure functions
from a stack to a stack, and how those functions are related to the
combinators of combinatory logic.
Post by John Nowak
Post by William Tanksley
and concatenative languages have the huge benefit of token
associativity that's not present in compositional languages with
explicit composition operators.
Eh? '(f . g) . h == f . (g . h)'; seems to work to me.
Yes, but does '(f . g) . h == (f .) g . h'? The operators are tokens
too, now that they're explicit. They're associative _operators_, but the
syntax itself is no longer associative.
Post by John Nowak
- John
-Wm
John Nowak
2009-03-06 00:12:57 UTC
Permalink
Post by William Tanksley
Post by John Nowak
"A concatenative programming language is one in which all terms
denote functions and the juxtaposition of functions denotes
function composition."
I think it's a good enough definition :-). Its only problem is that,
like the first paragraph of many Wikipedia entries, it's a little
absolutist.
I think that, given the choice, we should be overly specific rather
than overly general. In the former case, interesting work on the
boundaries will still get done, and we'll have a precise definition to
keep us focused. In the latter case, we get this same discussion every
month.

Truth be told, the amount of good research into concatenative
languages is nearly non-existent. We don't have much in the way of
significant results yet. Getting the definition out of the way seems
to me like a precondition for talking about what we're doing.

Obviously, there's nothing that says the definition could not expand
later. Words change meaning over time. I think it is better though to
avoid preemptively expanding definitions; instead, let's come up with
interesting things that don't fit the definition and see if they have
similar properties to things that do. Only this approach, applied
consistently, is going to help us arrive at an "optimal" definition
that describes an interesting set of languages with similar properties.

Just my opinion of course.
Post by William Tanksley
...I didn't assume that we were discussing a _functional_
compositional
language -- I never really noticed we were doing that before.
We've been doing it because it is useful to do so. Once you add
mutation or IO into the mix, the algebra of things becomes
significantly more complicated. To apply the approach described above,
I think we should first focus our efforts on detailing the properties
of purely functional concatenative languages if for no other reason
than it being simpler to do so. After we've made progress along these
lines, dealing with an impure language will be that much easier.
Post by William Tanksley
Post by John Nowak
Eh? '(f . g) . h == f . (g . h)'; seems to work to me.
Yes, but does '(f . g) . h == (f .) g . h'?
Heh, yes actually. 'f . g == (f .) g'. The reason is that '.' is a
function. If we were to write it prefix, the reason for this is more
clear: '(.) f g == ((.) f) g'. In a language with curried functions
like Haskell, the former is really just a more concise way of saying
the latter. They're functionally identical.
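
A quick concrete check (made-up functions, standard Haskell), showing that
all four spellings denote the same function:

f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

main :: IO ()
main = print ( ((f . g) . h) 10      -- 15
             , (f . (g . h)) 10      -- 15
             , ((f .) g . h) 10      -- 15: (f .) g is the same function as f . g
             , ((.) f g . h) 10 )    -- 15: the prefix form of the same thing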

I realize your point though, and I understand that a good notation has
value. That said, I don't see much to indicate that using an explicit
composition operator is a "less good" notation than using
juxtaposition. The language I'm working on now uses explicit
composition; this frees up juxtaposition to be used for other things
(in my case, meta-application, aka the application of a combining form
to a function or tuple of functions).

The current definition on Wikipedia claims that it must be
juxtaposition denoting composition. Accordingly, my current project
would not qualify. That's fine with me as it doesn't exist yet. When I
finish it, maybe I'll push the issue should it have the same
properties as a concatenative language minus the minor notational
difference.

In short, I'm going along with the "juxtaposition is composition"
definition because we don't have any existing counter-examples. That
said, we also have no existing concatenative languages that do not use
a stack. Accordingly, I've been thinking lately that it may be best to
go ahead and consider the stack (or some sort of stack/queue tuple)
part of the definition. Slava would be happy at least.

I'll back up the claim that perhaps the stack should be part of the
definition with the example of FP. FP and Joy are pretty similar on
paper. You can go ahead and denote composition in FP with
juxtaposition if you'd like. However, the simple difference between
always using a stack and using a list only when necessary is
huge. The languages feel completely different from one another, and
indeed the latter doesn't "feel" concatenative at all as we've both
previously agreed on.

Maybe a precise definition could look like this:

1. All terms denote functions.
2. All functions are unary functions from some aggregate (e.g. a
stack or stack/queue tuple) to an aggregate of the same type.
3. Juxtaposition denotes the composition of functions.

My problem with this definition is #2; it's ugly. I'll pose a question
that may lead to a way of being more precise in a separate email.
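
A minimal Haskell model of that definition (invented names, nothing
official): every word is a unary function from a stack to a stack, and a
program is just the composition of its words:

data Value = I Int deriving Show
type Stack = [Value]

push :: Int -> Stack -> Stack
push n s = I n : s

dup, mul :: Stack -> Stack
dup (x : s)         = x : x : s
dup _               = error "dup: empty stack"
mul (I a : I b : s) = I (a * b) : s
mul _               = error "*: need two numbers"

square :: Stack -> Stack
square = mul . dup        -- the program "dup *"; (.) composes right-to-left here

main :: IO ()
main = print (square (push 7 []))   -- [I 49]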

- John
spir
2009-03-06 08:19:42 UTC
Permalink
Le Thu, 05 Mar 2009 15:16:55 -0800,
Post by William Tanksley
Post by John Nowak
Really? Does quotation not violate this?
That's fair to ask... I'm not sure whether quotation is an "operation"
as such. I think it's a very academic question, since I think
"compositional" is of only pedagogical value.
I would say that quotation is one aspect, or level, or kind, of meta-programming. It's taking a bit of program as an object of the program itself, meaning as data. Like meta-linguistics [thus the term 'quotation' indeed (note that that is a quotation, and this is meta-linguistic discourse, too, and... ;-)]:

<<"'Joke' has 3 phonemes and 4 letters.", she said.>>
From this point of view, quotations, or similar tools, are not the same as plain programming where the raw materials are 'real' data only. And it is, afaik, pretty rare in practice outside the field of stack-based/concatenative languages.
Can a concatenative language live without quotations? In Joy, quotations are the materials of combinators, hence the answer seems to be no.

There is something I do not understand. It seems that literal code and word names inside quoting marks are both called quotations:

[1 2 3] [dup *] map

:square dup *
[1 2 3] [square] map

Isn't there a worthwhile distinction here? The second version seems analogous to first-class funcs. The first one looks a bit like lambdas, no? Or is it really taking code "flesh" as data? Or does code as data only really happen when one can tweak its guts like in Lisp?
Could a language like Joy live with only the second form? Wouldn't it even be good practice (--> factorization)?

denis
------
la vita e estrany
John Nowak
2009-03-06 08:32:30 UTC
Permalink
Post by spir
I would say that quotation is one aspect, or level, or kind, of meta-
programming.
They're just a way of pushing first-class functions onto a stack.
Post by spir
Can a concatenative language live without quotations?
Yes: Forth.
See also: http://www.latrobe.edu.au/philosophy/phimvt/joy/jp-flatjoy.html
Post by spir
There is something I do not understand. It seems that literal code
[1 2 3] [dup *] map
:square dup *
[1 2 3] [square] map
Isn't there a worthful distinction here?
No. [dup *] and [square] both push a first-class function onto the
stack. If you do not allow functions to access things inside a
quotation (as in Cat), '[dup *]' and '[square]' are the same function.
Post by spir
The first one looks a bit like lambdas, no ?
No. There's no form of abstraction involved.

- John
John Nowak
2009-03-06 08:36:53 UTC
Permalink
Post by John Nowak
Post by spir
The first one looks a bit like lambdas, no ?
No. There's no form of abstraction involved.
But -- they are both used to form first-class functions.

- John
Don Groves
2009-03-06 23:11:12 UTC
Permalink
Post by spir
Le Thu, 05 Mar 2009 15:16:55 -0800,
Post by William Tanksley
Post by John Nowak
Really? Does quotation not violate this?
That's fair to ask... I'm not sure whether quotation is an
"operation"
as such. I think it's a very academic question, since I think
"compositional" is of only pedagogical value.
I would say that quotation is one aspect, or level, or kind, of meta-
programming. It's taking a bit of program as object of the
programitself, meaning as data. Like meta-linguistics [thus the term
'quotation' indeed (note that that is a quotation, and this is meta-
<<"'Joke' has 3 phonemes and 4 letters.", she said.>>
From this point of view, quotations, or similar tools, are not the
same as plain programming where the raw materials are 'real' data
only. And is afaik pretty rare in practice outside the field of
stack-based/concatenative languages.
Can a concatenative language live without quotations? In joy
quotations are the materials of combinators, hence the answer seems
to be no.
There is something I do not understand. It seems that literal code
[1 2 3] [dup *] map
:square dup *
[1 2 3] [square] map
Isn't there a worthful distinction here? The second version seems
analog to first-class funcs. The first one looks a bit like lambdas,
no ? Or is it really taking code "flesh" as data? Or code as data
only really happens when one can tweaks its guts like in Lisp?
You may get disagreement on this point from others here but
not from me. As in Lisp, you can use an anonymous function,
like [dup *], or you can name it and refer to it by name and get
the identical result. So, yes, I consider [dup *] to be a lambda.

Also, there is nothing stopping a concatenative programmer
from "tweaking the guts" just as in Lisp, because anything that
can be done in a source file, or via interactive input, can also
be done from inside a running program.

Maybe we should call them "juxtaposed Lisps." ;-)
--
don
Post by spir
Could a language like Joy live with only the second form? Wouldn't
it even be good practice (--> factorization).
denis
------
la vita e estrany
John Nowak
2009-03-06 23:24:13 UTC
Permalink
Post by Don Groves
Post by spir
[1 2 3] [dup *] map
:square dup *
[1 2 3] [square] map
You may get disagreement on this point from others here but
not from me. As in Lisp, you can use an anonymous function,
like [dup *], or you can name it and refer to it by name and get
the identical result. So, yes, I consider [dup *] to be a lambda.
It's not a lambda. Quotation and lambdas have a similar use, but
they're not the same thing.

Your point about naming it and being able to refer to it is confused.
Even if you define 'square', you still need to use quotation to form a
new function that pushes the 'square' onto the stack. It is not the
same as giving the name for a function in Scheme and getting that
function back as a result. You're explicitly creating a new function.

- John
Christopher Diggins
2009-03-07 00:39:25 UTC
Permalink
Post by John Nowak
Post by Don Groves
Post by spir
[1 2 3] [dup *] map
:square dup *
[1 2 3] [square] map
You may get disagreement on this point from others here but
not from me. As in Lisp, you can use an anonymous function,
like [dup *], or you can name it and refer to it by name and get
the identical result. So, yes, I consider [dup *] to be a lambda.
It's not a lambda. Quotation and lambdas have a similar use, but
they're not the same thing.
In the context of Joy and Cat a quotation and a lambda are equivalent.
They are both abstraction operations over the contained term (i.e.
function that corresponds to that term). When you evaluate an
abstracted function, you get the function. This is basic lambda
calculus.

- Christopher Diggins
John Nowak
2009-03-07 01:21:49 UTC
Permalink
Post by Christopher Diggins
Post by John Nowak
It's not a lambda. Quotation and lambdas have a similar use, but
they're not the same thing.
In the context of Joy and Cat a quotation and a lambda are equivalent.
Equivalent in what sense? In the sense that they're used for similar
reasons? That would be what I said. If not, what sort of equivalency
are you talking about?
Post by Christopher Diggins
When you evaluate an abstracted function, you get the function. This
is basic lambda calculus.
Except quotation does not have the variables of lambda calculus. There
are no lambdas in a concatenative language. What we can do is describe
quotation in terms of lambda calculus:

\f. \s. cons f s

In other words, when quotation is applied to some function 'f', the
result is a new function that, when applied to some stack 's', returns
the result of consing 'f' onto 's'. This is all quotation is; nothing
more, nothing less.
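
A minimal Haskell rendering of that reading (invented names, a toy stack,
not Joy itself): quotation just conses the quoted function onto the stack,
and Joy's 'i' pops the quotation and applies it:

data Value = I Int | Fn (Stack -> Stack)
type Stack = [Value]

quote :: (Stack -> Stack) -> Stack -> Stack
quote f s = Fn f : s                -- \f. \s. cons f s

i :: Stack -> Stack                 -- apply the quotation on top of the stack
i (Fn f : s) = f s
i _          = error "i: no quotation on top"

dupMul :: Stack -> Stack            -- the body of [dup *]
dupMul (I n : s) = I (n * n) : s
dupMul _         = error "dup *: need a number"

main :: IO ()
main = case i (quote dupMul [I 6]) of
         I n : _ -> print n         -- 36
         _       -> error "unexpected stack"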

Claiming that quotation is "equivalent" to a lambda abstraction
doesn't make a whole lot of sense with this understanding. The only
thing that they have in common is that they're both used to create new
functions.

- John
John Nowak
2009-03-07 01:32:56 UTC
Permalink
Post by John Nowak
Claiming that quotation is "equivalent" to a lambda abstraction
doesn't make a whole lot of sense with this understanding. The only
thing that they have in common is that they're both used to create new
functions.
Let me clarify: Quotation is just a higher order function. When you
apply it to a function, you get a new function. This is a very simple
and straightforward operation.

Lambda abstractions are not functions; they're much more complicated.
It's true that lambda expressions *evaluate* to functions, and that's
where the confusion comes in. The way they create that new function
however is not at all equivalent to what quotation is doing.

- John
Christopher Diggins
2009-03-07 02:13:27 UTC
Permalink
Post by John Nowak
Let me clarify: Quotation is just a higher order function. When you
apply it to a function, you get a new function. This is a very simple
and straightforward operation.
You don't apply quotation to a function. You quote a term (i.e. an
expression). This creates a new term. When a quoted term is evaluated
it creates a function.

A quotation, like abstraction, is part of the language semantics
(a.k.a. abstract syntax). A higher-order function takes a function as
input and/or returns a function as output at run-time.

The only difference between lambda abstractions and quotations is that
lambda abstractions allow (but do not require) the usage of named
variables. In a quotation, there is always exactly one implied
variable, a stack (at least in Joy and Cat).

- Christopher
John Nowak
2009-03-07 02:33:25 UTC
Permalink
Post by Christopher Diggins
Post by John Nowak
Let me clarify: Quotation is just a higher order function. When you
apply it to a function, you get a new function. This is a very simple
and straightforward operation.
You don't apply quotation to a function. You quote a term (i.e. an
expression). This creates a new term. When a quoted term is evaluated
it creates a function.
There's no reason you have to view it that way. Viewing it as a simple
function gives you a very clear way of understanding its semantics.
Post by Christopher Diggins
A higher-order function takes a function as input and/or returns a
function as output at run-time.
I never was taught in math class that functions only get run at run-
time. You're talking about an implementation detail. If you have a
higher order function applied to a constant argument, you can reduce
it whenever you like.
Post by Christopher Diggins
The only difference between lambda abstractions and quotations is
that lambda abstractions allow (but do not require) the usage of
named variables.
Lambda abstractions *do* require the usage of named variables. That's
what lambda does -- introduce variables. You seem to be confusing
lambda calculus with Lisp.

Lambda is not a function. Quotation, on the other hand, is
understandable as a function. This is very much a good thing and not
something to be lost or confused by equating it with lambda. What
quotation does is *much* simpler than what lambda does.
Post by Christopher Diggins
In a quotation, there is always exactly one implied variable, a
stack (at least in Joy and Cat).
That's not an implied variable. I'm not even sure what an "implied
variable" would be. There's either a variable present that is
eliminated via substitution as part of an application or there isn't.
It's true that you can express the semantics of a function in a
concatenative language using lambda calculus, but that's just
equivalent to saying that lambda calculus is Turing complete.

Quotation does not introduce any variables, "implied" or otherwise.

- John
spir
2009-03-07 08:51:02 UTC
Permalink
Le Fri, 6 Mar 2009 21:13:27 -0500,
Post by Christopher Diggins
The only difference between lambda abstractions and quotations is that
lambda abstractions allow (but do not require) the usage of named
variables. In a quotation, there is always exactly one implied
variable, a stack (at least in Joy and Cat).
Maybe we'd better call this an "anonymous word/func".
I still think, at least conceptually, [square] is very different from [dup *] even if they end up performing the same way.

* [square] recalls something already defined.
* [dup *] defines a new, here anonymous, function.

Anyway, I understand that syntactically speaking there is no difference between [square] and eg [dup square square +]: the former being an anonymous func that simply happens to be 1 term long. (Correct?) But I find this a pity. One should be able to recall definitions.

[Side note: The point is similar for me to variables in applicative languages. Many allow or require first declaration/definition/initialization of a new name. In other languages (eg python), there is no way in
n = 1
to know whether:
* the programmer's *intention* is to introduce a new name:value pair or rebind an existing name to a new value
* the interpreter will create a new entry in the local namespace's dict, and give it an initial value, or update the value of an existing key:value pair.]
This also leads to syntactic scope issues, like the need for explicit local/global/free var/attribute distinctions.

Denis
------
la vita e estrany
John Nowak
2009-03-07 09:15:21 UTC
Permalink
Post by spir
I find this a pity. One should be able to recall definitions.
Why? It would be a useless duplication of the functionality provided
by quotation.
Post by spir
[Side note: The point is similar for me to variables in applicative
languages. Many allow or require first declaration/definition/
initialization of a new name. In other languages (eg python), there
is no way in
n = 1
* the programmer's *intention* is to introduce a new name:value pair
or rebind an existing name to a new value
* the interpreter will create a new entry in the local namespace's
dict, and give it an initial value, or update the value of an
existing key:value pair.]
Aye. This is why you should not use the same syntax for introducing a
new variable and updating a variable. I'm not quite sure though what
Python's design misfeatures have to do with quotation.

- John
spir
2009-03-07 10:48:49 UTC
Permalink
Le Sat, 7 Mar 2009 04:15:21 -0500,
Post by John Nowak
Aye. This is why you should not use the same syntax for introducing a
new variable and updating a variable. I'm not quite sure though what
Python's design misfeatures have to do with quotation.
- John
At least, in both cases the programmer cannot express his *intention* to reuse an already defined thing. I claim, but may be wrong, that a programmer who writes "[square]" means "use square's definition", not "create a new func that simply happens to have a single term, namely 'square'". In other words, the programmer probably intends to refer to whatever is called "square" -- not to build a new unnamed func. Again, I may be wrong.

denis
------
la vita e estrany
John Nowak
2009-03-07 11:18:23 UTC
Permalink
Post by spir
Le Sat, 7 Mar 2009 04:15:21 -0500,
Post by John Nowak
Aye. This is why you should not use the same syntax for introducing a
new variable and updating a variable. I'm not quite sure though what
Python's design misfeatures have to do with quotation.
- John
At least, in both cases the programmer cannot express his
*intention* to reuse an already defined thing. I pretend, but may be
wrong, that a programmer who writes "[square]" means "use square's
definition", not "create a new func that simply happens to have a
single term, namely 'square'". In other words, the programmer
probably intends to refer to whatever is called "square" -- not to
build a new unnamed func. Again, I maybe wrong.
You're wrong. Not to be overly blunt, but I'd encourage you to use
these languages a bit before guessing at this sort of thing.

- John

spir
2009-03-07 08:32:16 UTC
Permalink
Le Fri, 6 Mar 2009 18:24:13 -0500,
Post by John Nowak
Your point about naming it and being able to refer to it is confused.
Even if you define 'square', you still need to use quotation to form a
new function that pushes the 'square' onto the stack. It is not the
same as giving the name for a function in Scheme and getting that
function back as a result. You're explicitly creating a new function.
Isn't this issue an implementation choice? What I mean is at the conceptual/programmer level.

* There must be a way to distinguish a reference to a word from running it (think eg of Python's "func" vs "func()" or Io's "getSlot("meth")" vs "meth"). Why is it wrong to consider that "[square]" vs "square" plays this role?

* As I see it, the implementor has numerous options:
1. Create a new func as explained above by John.
2. Literally replace in code (~ macro) before processing (then we get an anonymous func again).
[square] map ==> [dup *] map
3. Store a reference (name, pointer, id) on stack when "[square]" is parsed.
4. Store "[square]" as data on stack.
5. Store square's definition on stack.
All would do the job?

Denis
------
la vita e estrany
John Nowak
2009-03-07 08:38:13 UTC
Permalink
Post by spir
Le Fri, 6 Mar 2009 18:24:13 -0500,
Post by John Nowak
Your point about naming it and being able to refer to it is confused.
Even if you define 'square', you still need to use quotation to form a
new function that pushes the 'square' onto the stack. It is not the
same as giving the name for a function in Scheme and getting that
function back as a result. You're explicitly creating a new function.
Isn't this issue implementation choice?
I'm talking about the semantics of the language, not the implementation.

- John