Discussion:
Must have SINGLE input/Arg.
chris glur crglur@gmail.com [concatenative]
2014-08-01 17:57:42 UTC
The 'impedance mismatch' [who introduced that term for
non-ElectricalEngineers?] problem of different numbers of
inputs & outputs of functions, is apparently the motivation
for the idea of passing a SINGLE stack.
And the ideas of 'currying'.
IMO, what matters, is reducing the human mental load.
Hiding N mental-chunks, by wrapping them in a stack
is fraudulent.
Yes, point-free is great. Everything 'popping-out' is just "it".
take it | wash it | cook it | eat it
Without even thinking about the theoretical aspects of
impedance mismatch when multiple input-args are needed,
it was obvious that the following 'schematic' does it:
-> set Arg2
-> set Arg3
Arg1-> DoA -> DoB(A,Arg2) -> DoC(B,Arg3) -> FinalResult.
So you set the extra Args, before the main composition starts,
and the relevant functions know where to get their extra args.


Sure, it's not as neat looking, but the mental-load is less
than wrapping multiple args, to look like 'unary'.
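Chris's schematic can be sketched in Python: bind the extra arguments into the stages before the chain runs, so the chain itself stays unary. The function names and bodies here are placeholders for DoA/DoB/DoC, not anyone's actual code.

```python
from functools import reduce, partial

def do_a(arg1):
    return arg1 * 2          # stand-in for DoA

def do_b(a, arg2):
    return a + arg2          # stand-in for DoB(A, Arg2)

def do_c(b, arg3):
    return b - arg3          # stand-in for DoC(B, Arg3)

def compose(*stages):
    """Left-to-right composition: each stage takes and returns ONE value."""
    return lambda x: reduce(lambda acc, f: f(acc), stages, x)

# Set the extra Args before the main composition starts...
pipeline = compose(do_a, partial(do_b, arg2=10), partial(do_c, arg3=3))

# ...then the journey is purely unary: Arg1 -> DoA -> DoB -> DoC -> FinalResult.
print(pipeline(5))  # (5*2 + 10) - 3 = 17
```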


== Chris Glur.
'John Nilsson' john@milsson.nu [concatenative]
2014-08-11 19:17:13 UTC
One problem with currying is that it makes (A,B) -> C != A -> B -> C != B -> A -> C, when in practice they should all be equal.

My thinking is that this could be addressed by just making them equal, so that

(D -> B) (A,B -> C) = (A,D -> C)

Now this gets a little problematic if we have (A,A -> A).

So I suggest that we add bindings to the type and see argument lists not as ordered tuples, but more like first-class environments. Thus the type above would be (x::A,y::A -> z::A) and composition would work as in the ABCD case above.

To allow composition with an arbitrary name we can add a rename type, so that

(x:a) (x::A,y::A -> z::A) = (a::A,y::A -> z::A)

This would also mean that it is an environment with bindings that gets threaded through the program, and not a stack.
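A minimal sketch of the idea, with argument lists modeled as Python dicts (first-class environments) rather than ordered tuples. The `compose` and `rename` combinators are my guesses at what John describes, not an implementation of any existing system.

```python
def compose(f, g, out_name):
    """Run f, bind its result under out_name, then run g on the
    extended environment: (D -> B) (A,B -> C) = (A,D -> C)."""
    def h(env):
        env = dict(env)          # thread a copy of the environment, not a stack
        env[out_name] = f(env)
        return g(env)
    return h

def rename(old, new, g):
    """(x:a) (x::A,y::A -> z::A) = (a::A,y::A -> z::A):
    the caller supplies `new`, g still sees it as `old`."""
    def h(env):
        env = dict(env)
        env[old] = env.pop(new)
        return g(env)
    return h

# (x::int, y::int -> int): both args share a type but stay distinguishable by name
add = lambda env: env["x"] + env["y"]
double = lambda env: 2 * env["d"]        # (d::int -> int)

# feed double's output in as x, giving (d::int, y::int -> int)
h = compose(double, add, out_name="x")
print(h({"d": 3, "y": 4}))      # 2*3 + 4 = 10

# rename x to a: the caller now supplies "a" instead of "x"
add_a = rename("x", "a", add)
print(add_a({"a": 1, "y": 2}))  # 3
```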




BR,

John
—
Sent from Mailbox
John Meacham john@repetae.net [concatenative]
2014-08-11 19:39:35 UTC
However, they actually are not isomorphic, and they behave differently during beta reduction, because shared computation can be performed in between the passing of the arguments. For instance,

f = \x -> let bx = x^10 in \y -> bx + y

when applied to just one argument will share the computation of the tenth power with all of its uses.
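The same sharing can be seen in Python with a closure: the expensive part runs once when the first argument arrives, and every later use of the partial application shares it. This is a sketch of the idea, not a translation of any particular language's reduction semantics.

```python
powers_computed = 0

def expensive_pow(x):
    global powers_computed
    powers_computed += 1
    return x ** 10

def f(x):
    bx = expensive_pow(x)       # computed once, when the first argument arrives
    return lambda y: bx + y     # each later use shares bx via the closure

g = f(2)                        # the tenth power is computed here, once
results = [g(1), g(2), g(3)]    # three uses of the partial application
print(results)                  # [1025, 1026, 1027]
print(powers_computed)          # 1 -- the tenth power was computed only once
```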

When working with a lazy language, you also have the fact that a tuple has another value: the tuple itself can be bottom, in addition to each of its components being bottom. This means tupled arguments are different from fully applied curried ones, as they can take on more values.

These are all good and useful things; being able to control sharing and evaluation via proper use of currying is a very powerful tool for many languages.

John
--
John Meacham - http://notanumber.net/
John Nilsson john@milsson.nu [concatenative]
2014-08-12 00:26:20 UTC
I was thinking that composition would also be partial evaluation. So the
benefits of curried functions should be the same.

Regarding evaluation, I'm curious if there is something interesting to be found by looking at kappa calculus or a similar first-order system instead of full lambda calculus. Or, in any case, aiming for a total language, so no bottom. Looking around recent papers from various researchers, it seems to me that one could get quite far by that route. Possibly one has to introduce some disciplined way to do higher-order things sooner or later, though.

BR,
John
John Meacham john@repetae.net [concatenative]
2014-08-12 03:16:17 UTC
An issue with no bottom is that having it is necessary to properly reason
about infinite data structures. Now, as to whether you actually want
infinite data structures in your language is another issue, but with my
Haskell background I gotta say they are darn handy. :)

I actually have a concatenative lazy language I wrote called 'levity' (a play on Joy) for use internally by a project a while ago. It had some interesting properties; I should write up the core, as it may be interesting to other fans of concatenative languages even if the code itself is probably not reusable.

Among other things, it had its symbols in compositional order and was list based instead of stack based; it just happened that functions implicitly acted upon the 'current list'. 'f g h' means f(g(h(current list))). But by having it in compositional order, something like (1 2 3) actually ended up being a list in the order 1, 2, 3, and (1 2 2 1 +) ended up also being (1 2 3). (..) was just syntactic sugar for 'run the symbols between the parens on an empty stack and take the resulting stack as your new list'.
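One toy reading of that model (my guesswork from the description above, not levity's actual semantics, and it sidesteps the compositional-order evaluation details): keep the working end of the 'current list' at its tail, so numbers append themselves and operators act on the end. Both observations then come out as described: (1 2 3) is the list 1, 2, 3, and (1 2 2 1 +) reduces to the same list.

```python
def run(tokens, lst=None):
    """Evaluate tokens against a 'current list' whose working end is the tail.
    Numbers append themselves; '+' replaces the last two items with their sum."""
    lst = list(lst) if lst else []
    for tok in tokens.split():
        if tok == "+":
            b, a = lst.pop(), lst.pop()
            lst.append(a + b)
        else:
            lst.append(int(tok))
    return lst

print(run("1 2 3"))      # [1, 2, 3]
print(run("1 2 2 1 +"))  # [1, 2, 3] -- same list as above
```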

Due to where I was using the language, as part of a strategy specification
system for a theorem prover, I was able to actually keep it fully pure in
the mathematical sense so laziness was a no-brainer without side effects to
worry about.

John
John Nowak john@johnnowak.com [concatenative]
2014-08-12 15:01:31 UTC
Post by John Meacham ***@repetae.net [concatenative]
An issue with no bottom is that having it is necessary to properly reason about infinite data structures.
I don't think this is true. It's possible to define and formally reason about infinite data structures in Coq by defining codata, and Coq has no bottom. You might want to look at Turner's "Total Functional Programming" paper.
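As a loose illustration of the productivity idea behind codata (an analogy in Python, not Coq's actual coinductive machinery): an infinite structure is fine to reason about as long as every finite observation of it terminates.

```python
from itertools import islice

def nats(n=0):
    # A productive definition: each step yields in finite time,
    # so any finite observation of the infinite stream terminates,
    # even though the stream as a whole never does.
    while True:
        yield n
        n += 1

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```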

- jn
'John Nilsson' john@milsson.nu [concatenative]
2014-08-13 00:07:09 UTC
I'm not sure that is the case. As I understand it, it should be possible even for a total language to reason about infinite structures.

Here's a paper on copatterns, http://www.cs.mcgill.ca/~bpientka/papers/icfp13.pdf, on the termination and productivity of those.

Does lazy exclude total, btw?

If you do publish something about that language, it would be interesting to read :)
BR,

John
John Meacham john@repetae.net [concatenative]
2014-08-14 02:17:51 UTC
Lots of fun stuff to read, thanks.

I guess I was thinking more of the simple inductive reasoning I tend to use with Haskell code that uses bottom as a base case. Perhaps this will be what finally inspires me to learn Coq. :)
chris glur crglur@gmail.com [concatenative]
2014-08-14 02:31:30 UTC
Some important features may not work in this version of your browser, so
you have been redirected to the Basic HTML version. Upgrade to a modern
browser, such as Google Chrome.
==> SCREW YOU !!


Readers of this forum are dispersed over the globe, with different
knowledge/experience backgrounds, using different OSs ......


What [if any] is our common goal?


I came here to find formal-methods of proving-code-correctness,
and found none.


OTOH, perhaps helped along by some ideas from here, I've found great value
in a system that I've been using for the last year, which is essentially
based on what I understand to be the essence of functional-composition:
the ability to build on existing work [like Darwinian evolution does],
instead of repeatedly starting from the beginning.


Apparently my plain-road-sign notation was not obvious?
"A->B->C->...N"
denotes that initial data A is transformed to data B, then to data C, and so on,
via functions.


For my new-toy [written as a joint token, because it's ONE concept]:
"A" is <a mass of text files which I've accumulated, because over the
years, when I paid to go on line, I didn't "have the vehicle travel with
ONE person". I did: AppendAllURL_ContentsListedIn File1 to File2>.


And stage "N" is the experience of being able to lie-down and have
the texts read to me at my convenience.


Can you believe the BBC's recent claim that UK adults spend more time
[8hr 41min] on <iComs> than they sleep?!


Since this "A->B->C->...N" method has proven so successful [economy
of effort is my criterion], I've asked myself, 'what are its essential
attributes?'


Obviously, the impedance-matching between stages.
BTW, that's not MY introduced term. I found it here.


Also, I must admit that pure/clean "A->B->C->...N" was not achieved.


If a sound-file 'tells something important', how will you know how
to trace back to the original text, of stage "A"?


Instead of putting the ID-of-A, *IN* A;
which is a bit like having unary In/Out parameters,
which is related to the big story about Currying:
I put the ID in a globally accessible file, before I initiate the
journey: "A->B->C->...N".


Just to demonstrate the effective economy of effort of the system:
recently, I noticed an annoying:
"bla bla bla *caret caret* bla bla..." in some wikipedia-texts,
fetched at some previous period. Part of their formatting at THAT
time.


Removing the annoying "caret caret", which was the TextToSpeech
of some eg: "the cat sat on the ^ ^ mat", needed only a half-line
of extra code, which constituted one extra stage in the filter-chain.


I don't WANT to remember what the complete chain is, but I'll guess:
* send the text-which is start-end-marked |
* to the sequential stages of filtering out bad-strings |
* do 'translations' [eg. US="you ess" and not "us"] |
* convert the ID.wav file to a more-compact ID.ogg file.


ID is the global variable, which the 'vehicle' picks up towards the
<end of its journey>.
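That sort of filter chain is easy to sketch; each stage is one small function, and adding a new cleanup stage (like stripping the stray "^ ^") really is about a half-line. The stage names and rules here are illustrative, not Chris's actual scripts.

```python
import re

def strip_bad_strings(text):
    return text.replace("^ ^", "")             # the extra half-line stage

def translate(text):
    # e.g. US = "you ess" and not "us"
    return re.sub(r"\bUS\b", "you ess", text)

def chain(*stages):
    """Compose the filter stages into one text -> text function."""
    def run(text):
        for stage in stages:
            text = stage(text)
        return text
    return run

pipeline = chain(strip_bad_strings, translate)
print(pipeline("the cat sat on the ^ ^ mat, US style"))
```

Adding another stage later means writing one small function and inserting its name into the `chain(...)` call, without touching or even remembering the other stages.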


The whole project is not viable without a <visual editor>
like 'wily': you need to be able to visually scan a 60-line screen
of text in under a second, to eg. punctuate header lines, which
'modernly' are not punctuated.
This proves again that computing is NOT about computers, but
rather about human-cognition.


BTW, what happened to the CYC project?
If a project/idea/new-system can become usable at an early stage,
it can be refined over time: Darwinian evolution.
Which is not viable if you only ever get your cows almost pregnant.


BTW, I use the toy-sized/priced rPi computer to play the
TextToSpeech.


== Chris Glur. Let's see if this crappy gmailer works now!
chris glur crglur@gmail.com [concatenative]
2014-08-15 16:31:01 UTC
Re. refining the TextToSpeech system by just adding extra stages of
composition/data-transformation: it was quite nice to be able to lie down
after a strenuous walk, and listen to Dijkstra's speech.


He said that 'the ALGOL compiler took 1% of the man-hours to write,
compared to the FORTRAN compiler; and that the public view of programming
was that it was a detailed CLERICAL job'.


Since TTS works so well for me, I really must extend it to be able to
<understand spoken mathematical equations, and even source-code>.


There are blind coders, I believe?


You've gotta love the dirty-hacking facilities of *nix, especially with a
<visual-IDE>. It's so easy [that it should be illegal] to test the
possibilities, in the text-file:
X = 4 + Y - F( man and dog) + [3*P +47].
X = 4 + Y - F.
open fun . man and dog . close fun .


What's my point?
Well, with this lazy/dumb compositional style, you can easily make/test/use
filters that fix/handle: parentheses; lower-to-upper case; extra pauses via "."
and even <new line>; etc.


Can anybody point to, even a toy, example of where/how
joy, or other <cat system> helped prove code validity?


== Chris Glur.
'William Tanksley, Jr' wtanksleyjr@gmail.com [concatenative]
2014-08-26 17:23:30 UTC
Post by chris glur ***@gmail.com [concatenative]
Readers of this forum are dispersed over the globe, with different
knowledge/experience backgrounds, using different OSs ......
What [if any] is our common goal?
We have an almost impossibly broad challenge, Chris -- we're taking a
practical language that exists in only a few different instances and
discovering (thanks to Manfred von Thun) that there's some mathematics
that describes it. What follows from that?
Post by chris glur ***@gmail.com [concatenative]
I came here to find formal-methods of proving-code-correctness,
and found none.
I think that's an understandable expectation, although I'm not sure
why you think there aren't any -- it may be that you're placing more
demands on a young field than is really appropriate. We're not
geniuses -- or I'm not -- we're just curious people. What we've built
so far is what there is to show. Thousands of people have worked on
other language types; tens of people have worked on concatenative
languages.

As for your language ideas -- try Microsoft Powershell. It has an
amazingly clever way of integrating pipelining, automatic iteration,
and a number of other things. Wolfram's new language is likely to be
even better, and apparently a version of it ships with the rPi (so you
might be able to try it for free). (Have you watched his video?)

Your ideas aren't about concatenative language, though, and I don't
know why you keep posting the same things here. I'm not dissing them
and I'm glad you posted (they're cool ideas), but it's getting
repetitive now.
Post by chris glur ***@gmail.com [concatenative]
== Chris Glur. Let's see if this crappy gmailer works now!
-Wm
chris glur crglur@gmail.com [concatenative]
2014-08-29 01:23:15 UTC
As for your language ideas -- try Microsoft Powershell. It has an amazingly
clever way of integrating pipelining, automatic iteration, and a number
of other things. |
Yes, when I mistakenly thought that I needed MicroSoft to connect a 3G dongle,
and bought a M$ device, I got the impression that M$ finally realised the
power [results/effort ratio] of piping and, being late-comers, were able to
design cleaner syntax than the disastrous *nix ad-hoc mess.
Wolfram's new language is likely to be even better,..
My recent look into it suggests it's decades old?
And yes, a cleaner syntax is exactly why I HAD investigated it.
Your ideas aren't about concatenative language,
Well you definitely understand them and have been FOLLOWING my same route:
Powershell & Wolfram [rather than eg. biology]. Is that just a coincidence, or
a confirmation that Powershell & Wolfram ARE 'about concatenative language'?


I was again very impressed 'listening'/TTS to <the tunes man>.
Except for beginners: computing is not about computers.
It's about human-cognition.


The value [by economising on human effort] of 'compositional' programming
comes from the ability to re-use the existing code, by just adding/modifying
one of the N-stages, without even understanding/remembering the details
of the other N-1 stages.


== Chris Glur.
'William Tanksley, Jr' wtanksleyjr@gmail.com [concatenative]
2014-09-08 22:20:19 UTC
Post by chris glur ***@gmail.com [concatenative]
Post by 'William Tanksley, Jr' ***@gmail.com [concatenative]
Your ideas aren't about concatenative language,
Powershell & Wolfram [rather than eg. biology]. Is that just a coincidence, or
a confirmation that Powershell & Wolfram ARE 'about concatenative language'?
No, it's because I follow language design in general. It really,
really isn't concatenative.
Post by chris glur ***@gmail.com [concatenative]
== Chris Glur.
-Wm
Don Groves dgpdx@comcast.net [concatenative]
2014-09-08 22:23:54 UTC
Post by chris glur ***@gmail.com [concatenative]
The value [by economising on human effort] of 'compositional' programming
comes from the ability to re-use the existing code, by just adding/modifying
one of the N-stages, without even understanding/remembering the details
of the other N-1 stages.
== Chris Glur.
Verily so. I'm reminded of the teacher who asked his class to determine the most
efficient path to move an object from the top of his desk to a far upper back corner
of the room. The students mostly got it right. Then he asked them to do the same
for an object placed on the floor beside his desk. The students worked furiously
once more and produced an answer, but they were all incorrect. The correct
answer was to pick up the object and place it on top of the desk.
--
don groves